Deleting Fast and Safe in ServiceNow

One of my first articles, Deleting Fast in ServiceNow, is my most popular and controversial, and for good reason. This is the last article in my series following up on my most popular posts, at least for now!

To summarize the prior article: I benchmarked the delete APIs available in ServiceNow to see which deleted records the fastest. GlideRecord deleteMultiple running from a workflow had the best execution time overall.

The Controversy

I slammed the GlideRecord deleteRecord method pretty hard, since it was over 1,000 times slower, but I didn’t really unpack the need to sometimes suppress business rules, notifications, and workflows while deleting. The deleteMultiple option does trigger business rules (and all of the above) by default; however, the setWorkflow(false) method does work with deleteMultiple as well!
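Here is a minimal sketch of that combination. The table and query are placeholders - swap in your own before running anything.

// Delete all matching records without firing business rules,
// notifications, or workflows. Table/query are placeholder values.
var gr = new GlideRecord('incident');
gr.addQuery('active', false);
gr.setWorkflow(false); // suppress business rules, notifications, workflows
gr.deleteMultiple();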

That being said, it’s typically safer to explicitly disable any onDelete notifications and business rules, and then run deleteMultiple. You may also want to consider turning off audit delete beforehand; otherwise you’ll have to clean up the audit table records with deleteMultiple again (unless you want the safety net).

The deleteRecord method still has its place when you want to delete a single record or fewer than a handful, and in some ways it can be a bit safer precisely because it is slower.

Best Practice

Before doing any mass deleting, I would strongly recommend reading ServiceNow’s KB0717791 on mass deletion recommendations. There is also a good resource, called Safety Tips for writing Background Scripts, which covers a lot of the common mistakes people make while doing things like deleting. If you are deleting small to medium datasets, it actually isn’t a bad idea to run the delete as a background script, since a rollback context is generated, which allows you to restore the data.
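As a small illustration of the “verify before you delete” advice from those resources, a cautious background script might look like the sketch below. The table and query are placeholders: query and count first, eyeball the number, then uncomment the delete.

// Sketch of a cautious background-script delete. Table/query are placeholders.
var gr = new GlideRecord('u_import_staging');
gr.addQuery('sys_created_on', '<', gs.daysAgoStart(90));
gr.query();
gs.info('About to delete ' + gr.getRowCount() + ' records'); // sanity check first
// Once the count looks right, uncomment the next two lines and re-run:
// gr.setWorkflow(false);
// gr.deleteMultiple();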

Exploring More

As promised, I looked into some additional factors that could play a role in delete performance: auditing, big fields with data, and a baseline deletion for reference.

[Chart: delete speed by data type, with data series for large fields, audited, and baseline - a comparison of delete speed across different types of records.]

Auditing Impact

For testing auditing, I used 3 small string fields similar to the baseline and just enabled table-level auditing. Unsurprisingly, turning on auditing drastically reduces delete speed, as the platform has to check cascade reference rules, back up a copy of each record onto the audit table, and so on. What is almost surprising is how linear the impact is: deletion time increases by 0.03s per record processed. At that rate, 200k audited records would take on the order of 100 minutes to delete. It goes to show how important it is to minimize auditing unless it is absolutely necessary.

Big Fields Impact

For testing big fields, I added 3 large 4,000-character-limit string fields and populated them with random data. The impact is noticeable, taking 150s longer than the baseline to delete 200k records, but overall the linear rate increase is only 0.0008s per record processed. From my research, this seems to boil down to the buffer pool, where the data is cached in case of an undo while it is being deleted.

Baseline Comparison

My baseline table had only 3 small string fields, with no auditing. It took <1s to delete 10k records and less than 10s to delete 200k records, which puts the base speed for record deletes at around 1 record every 0.00005s. Mind-blowingly fast! So if you want to reduce delete time, it has more to do with the data size and options (auditing) than the record count.

Advanced Deleting - Time Slice Method

I wanted to give a strong mention to how some tables in ServiceNow, namely sys_audit, which are notorious for being large and sharded, have to be handled specially when it comes to deleting (and other DB operations). There is a known technique where you step day by day, and sometimes hour by hour, deleting all the records within that timeframe. This method takes advantage of the data being indexed by time (created on) and sharded/broken up by time, so you are strategically retrieving and accessing data in sequence and removing it surgically. I could probably write a full article on the algorithm - feel free to comment if interested!
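In the meantime, here is a minimal sketch of the time-slice loop, assuming the target table is indexed on sys_created_on. The table name, date range, and step size are all placeholders.

// Time-slice delete: walk a date range one day at a time.
// Table name and date range are placeholder values.
var table = 'sys_audit';
var start = new GlideDateTime('2020-01-01 00:00:00');
var end = new GlideDateTime('2020-02-01 00:00:00');
while (start.compareTo(end) < 0) {
    var sliceEnd = new GlideDateTime(start);
    sliceEnd.addDaysUTC(1); // use addSeconds(3600) instead for hour-by-hour slices
    var gr = new GlideRecord(table);
    gr.addQuery('sys_created_on', '>=', start);
    gr.addQuery('sys_created_on', '<', sliceEnd);
    gr.setWorkflow(false); // skip business rules/notifications on these rows
    gr.deleteMultiple();
    start = sliceEnd;
}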

Parting thoughts

It’s good to be curious and see how far we can move the needle. I want to leave you with the fact that there are even more aspects to explore:

  • Indexing - Typically, after data is deleted, the index data is not automatically rebuilt. This can sometimes lead to the index portion of a table being bigger than the table data itself.

  • Table growth monitoring is a good practice. There is a self-service catalog item on the ServiceNow Support site to pull the top 20 or more tables on your instance, which is a good thing to check regularly. There is also a technology ServiceNow might release more widely in the future, called Instance Observer, which has some capabilities for table growth monitoring.

  • MariaDB explains the complexities of big deletes on this KB page: https://mariadb.com/kb/en/big-deletes/. While I think this is really good info, a lot of it boils down to the options and decisions ServiceNow made in its database configuration, which you would need to know to optimize deletes to the max. Some answers we may never get (unless you work for ServiceNow).

Undocumented: Table Extension APIs

There are many undocumented ServiceNow APIs that have existed since the beginning. Here are a couple you may find yourself needing as a system administrator or developer.

The first is GlideTableParentChange, for when you need to move a table to be extended from another table. An example use case: you may have accidentally created a table under the wrong parent, like the wrong CI or task table.

var table = '';      // table you want to re-parent
var old_parent = ''; // current parent table
var new_parent = ''; // the table you want to extend, like cmdb_ci or task
var changer = new GlideTableParentChange(table);
changer.change(old_parent, new_parent);

The second is GlideDBUtil.promoteColumn, a one-line script that lets you move a column up to the parent table. This also might be useful in CMDB management, where you may have created a column on a child table, but it really needs to be moved to the parent so more CIs can use the field.

GlideDBUtil.promoteColumn('table_to_move_from', 'table_to_move_to', 'field_to_move', true);

Finally, there is the documented TableUtils API, which also offers some useful functions for dropping tables:

https://docs.servicenow.com/bundle/rome-application-development/page/app-store/dev_portal/API_reference/TableUtils/concept/c_TableUtilsAPI.html?cshalt=yes

Monitoring Series: Database Size Report

ServiceNow instance performance is a concern for any ServiceNow admin. To get ahead of the curve, it is a good idea to keep an eye on your instance database size.

Database size is the sum of all table data and table index data stored on the instance database server. It’s easy to ask ServiceNow directly or fill out their self-service form on the Support page, but even easier to just build a report and put it on a dashboard yourself!

Step by Step

  1. First go to Reports > Create New

  2. Set a name like “Current Database Size (MB)”, and select the table UA Instance Info [ua_instance_state].

  3. Select the type as Speedometer (you can also do Single Score, but it’s nice to set ranges!)

  4. On the Configure tab, select the Aggregation as Sum and the field as “Primary database size (MB)”.

  5. On the Style tab, you can set the direction to Minimize and set reasonable ranges that you think your database size should fall under.

  6. Finally, you can go and add it to a dashboard.

The finished result should look something like this:

[Screenshot: db_size_detail.PNG - the finished database size speedometer report]

This report is an excellent addition to any system administrator’s dashboard.
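If you would rather pull the same number from a script, say to log it on a schedule, a GlideAggregate query over the same table works. Note that the column name below is an assumption on my part; check the dictionary for the actual field backing “Primary database size (MB)”.

// Sum the primary database size from ua_instance_state.
// NOTE: 'primary_db_size' is an assumed column name - verify in sys_dictionary.
var agg = new GlideAggregate('ua_instance_state');
agg.addAggregate('SUM', 'primary_db_size');
agg.query();
if (agg.next()) {
    gs.info('Current database size (MB): ' + agg.getAggregate('SUM', 'primary_db_size'));
}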