A Different Approach to NetSuite CSV Import

During 2020, we took part in the implementation of a new subsidiary. One of the requirements was the ability to import Customer Refund records. That meant NetSuite's native CSV Import wouldn't be an option, as it does not support this record type.

After exploring the options, we decided to build our own CSV Import solution to meet the demand. The task itself was simple: receive a CSV file in a single format and process the records via SuiteScript. However, we wanted something more flexible and more robust that could handle future demands, and we wanted it done properly. Therefore, the solution was designed to cover much more than the original requirement.

The implementation was successful, and so far no unexpected behavior has been observed. It is a different approach to NetSuite's CSV Import tool, with its own pros and cons. I decided to share the solution on this blog, but I was not satisfied with its capabilities. I have been working on it in my free time for the last two weeks, and it is now in a much better place: more powerful and more capable. Before going into its current state, I will first explain how the initial solution was designed.

The expectation was a single user uploading a single CSV file containing Customer Refund data. However, the customer wanted to import more transaction types: Journal Entry, Customer Payment, Vendor Payment and Customer Deposit records, to be exact. So, we decided to cover all of these record types in our solution to make the user's life easier; otherwise, the user would need to run separate imports for each record type.

We decided to create a Suitelet to capture the file from the user and a Map/Reduce script to do the processing. I chose the Papaparse library for CSV parsing so that I wouldn't have to maintain a RegEx like /^\s*(?:'[^'\\]*(?:\\[\S\s][^'\\]*)*'|"[^"\\]*(?:\\[\S\s][^"\\]*)*"|[^,'"\s\\]*(?:\s+[^,'"\s\\]+)*)\s*(?:,\s*(?:'[^'\\]*(?:\\[\S\s][^'\\]*)*'|"[^"\\]*(?:\\[\S\s][^"\\]*)*"|[^,'"\s\\]*(?:\s+[^,'"\s\\]+)*)\s*)*$/ or /(?!\s*$)\s*(?:'([^'\\]*(?:\\[\S\s][^'\\]*)*)'|"([^"\\]*(?:\\[\S\s][^"\\]*)*)"|([^,'"\s\\]*(?:\s+[^,'"\s\\]+)*))\s*(?:,|$)/g and still be hunting for the edge case it doesn't handle properly. The file mapping and import options could have been hardcoded, which would have been an easy win, but considering all the flexibility NetSuite provides through customization, that approach wouldn't satisfy us.
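
To give an idea of how these pieces fit together, here is a minimal sketch of the Map/Reduce entry point, assuming the Papaparse library is deployed next to the script as ./papaparse.min and the uploaded file's ID arrives through a hypothetical script parameter; the actual project wires this up differently.

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/file', 'N/runtime', './papaparse.min'], (file, runtime, Papa) => {

    // getInputData: load the uploaded CSV and emit one object per data row.
    const getInputData = () => {
        const fileId = runtime.getCurrentScript().getParameter({
            name: 'custscript_csv_import_file_id' // hypothetical parameter ID
        });
        const csvText = file.load({ id: fileId }).getContents();

        // Papaparse turns the raw text into objects keyed by the header row,
        // so there is no hand-rolled RegEx to maintain.
        return Papa.parse(csvText, { header: true, skipEmptyLines: true }).data;
    };

    // map: each row arrives as a JSON string and is turned into a record here.
    const map = (context) => {
        const row = JSON.parse(context.value);
        // ... build and save the record using the mapping parameter ...
    };

    return { getInputData, map };
});
```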

The next challenge was whether dynamic record mapping would be possible; in other words, if the user wants to add or remove columns in the CSV template, it should be possible without a code change. This was achieved with a JSON-based mapping object that holds the details for each record type and is passed to the Map/Reduce script as a parameter. It brought further flexibility: each record type can have its own mapping while still using the same CSV file, and it is even possible to have multiple mappings for a single record type if needed.
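
Purely as an illustration (the column headers, field IDs and overall shape below are made up for this post, not the project's actual parameter format), the mapping parameter can be pictured as something like this:

```javascript
// Illustrative mapping shape: CSV column headers on the left,
// NetSuite field details on the right, grouped per record type.
const MAPPING_EXAMPLE = {
    customerrefund: {
        body: {
            'Customer': { fieldId: 'customer' },
            'Date':     { fieldId: 'trandate', type: 'date' },
            'Memo':     { fieldId: 'memo' }
        },
        lines: {
            sublistId: 'apply',
            columns: {
                'Applied Transaction': { fieldId: 'doc' },
                'Amount':              { fieldId: 'amount' }
            }
        }
    },
    journalentry: {
        // a second, independent mapping for Journal Entry columns
    }
};
```

Because the mapping lives in a script parameter, adding or removing a CSV column is an edit on the deployment record rather than a code change.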

Beyond the field mapping, there was also room for import options. What if the user wants to enter the text value of a select field option instead of its Internal ID? We approached this with an import options parameter, again holding JSON-based data. Initially it only held field type options like text or date, but its usage expanded as we moved forward. It is now possible to do the following (a sketch of the options follows the list):

  • Set External IDs, or not
  • Use a different delimiter for the CSV file
  • Post to locked periods, or not
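
Again purely as an illustration (the option names below are invented for this post and may not match the project), the import options parameter can be pictured like this:

```javascript
// Illustrative option names; check the example Script Deployment records for the real ones.
const IMPORT_OPTIONS_EXAMPLE = {
    delimiter: ';',               // parse the CSV with a non-default delimiter
    setExternalId: true,          // write the external ID values from the file
    allowLockedPeriods: false,    // do not post into locked accounting periods
    fieldTypes: {
        trandate: { type: 'date', format: 'DD.MM.YYYY' },
        customer: { type: 'text' } // resolve the text value instead of the Internal ID
    }
};
```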

Another advantage of the import options is the ability to decide which fields to set first, regardless of the order in the mapping or the CSV file. It is important to set the subsidiary or entity information before other fields, since those values influence what the remaining fields will accept. Similarly for the record lines, the item or account information must be set before proceeding with the other fields.
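
A minimal sketch of that idea, assuming the mapped body values have already been resolved into a plain object and that the priority list comes from a hypothetical setFirst option (the helper name is also made up):

```javascript
// Apply priority fields (e.g. subsidiary, entity) before everything else,
// because they influence what the remaining fields will accept.
function applyBodyFields(rec, mappedValues, setFirst) {
    setFirst
        .filter((fieldId) => fieldId in mappedValues)
        .forEach((fieldId) => rec.setValue({ fieldId: fieldId, value: mappedValues[fieldId] }));

    Object.keys(mappedValues)
        .filter((fieldId) => !setFirst.includes(fieldId))
        .forEach((fieldId) => rec.setValue({ fieldId: fieldId, value: mappedValues[fieldId] }));
}

// Usage with an N/record object:
// applyBodyFields(rec, { subsidiary: 2, entity: 118, memo: 'May refunds' }, ['subsidiary', 'entity']);
```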

Finally, we had a solution that was solid and open to expansion. By anticipating possible future requirements, we could reduce the development costs they might otherwise cause: new requirements from the user could be handled by reconfiguring the Map/Reduce script deployment at any time. The only thing the user needed to do was prepare a single CSV file and upload it through the Suitelet page, which would trigger the Map/Reduce script. The user can now import any record type supported by SuiteScript and can import more lines per file than the native CSV Import tool allows.

To avoid adding more complexity to the solution, we limited each import to a single subsidiary. This was also required from a security perspective: the Map/Reduce script checks the user's assigned roles to make sure the user has permission to create the record type for that subsidiary. It also gives the user the flexibility to run the import under any assigned role. If the import is for Vendor Payment records but the CSV file is uploaded with an Accounts Receivable role, the Vendor Payments would still be created as long as the user has some assigned role that allows it for the subsidiary.

That was the solution we ended up with during the implementation of the new subsidiary. I decided to post it here for others to benefit from, but I didn't feel it was powerful enough to cover more use cases. That was two weeks ago. Thinking about what could be improved, I had a couple of ideas; some are now available, and the rest are waiting to be added. In other words, there is still room for improvement. There is always room for improvement.

I first added a custom record to the solution for queueing. Previously, the user could upload only one CSV file at a time; if another file needed to be added, the previous import had to finish first. It is now possible to add as many files to the queue as needed.
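
The queue entry itself is just a custom record. A rough sketch of how the Suitelet could create one, with hypothetical record and field IDs (the real definitions live in the SDF project):

```javascript
require(['N/record'], (record) => {
    // Hypothetical IDs; check the SDF project for the actual custom record definition.
    const enqueueFile = (uploadedFileId) => record.create({ type: 'customrecord_csv_import_queue' })
        .setValue({ fieldId: 'custrecord_queue_file', value: uploadedFileId })
        .setValue({ fieldId: 'custrecord_queue_status', value: 'PENDING' })
        .save();

    // The Suitelet would call enqueueFile() with the ID of the file it just stored,
    // and the queue processor later works through the entries in submission order.
});
```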

The second piece of functionality was the ability to import spreadsheet (Excel) files. This is not supported by NetSuite at all; through the native CSV Import, users can only import CSV files with different delimiters. I believe this new option will make users' lives easier. Besides Microsoft Excel (xlsx, xls) files, users can also import OpenDocument Spreadsheet (ods) files, and it can be expanded to any other file type the external SheetJS library supports.

Moving forward, the initial solution was designed to import records for a single subsidiary. Playing around with the new spreadsheet import option, I decided to require the sheet names of the spreadsheet files to be the Internal IDs of the subsidiaries. The user can now import a single spreadsheet file for multiple subsidiaries. I didn't want to use the subsidiary name because sheet names are limited to 31 characters.
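
A simplified sketch of both points, assuming the SheetJS library is deployed next to the script as ./xlsx.min (the helper name is made up; the project's actual code is structured differently):

```javascript
require(['N/file', './xlsx.min'], (file, XLSX) => {
    // Hypothetical helper: split one uploaded spreadsheet into per-subsidiary row sets.
    const readSpreadsheet = (fileId) => {
        // Binary files load from the File Cabinet as base64, which SheetJS can read
        // for xlsx, xls and ods workbooks alike.
        const workbook = XLSX.read(file.load({ id: fileId }).getContents(), { type: 'base64' });

        // Each sheet name is expected to be the Internal ID of a subsidiary.
        return workbook.SheetNames.map((subsidiaryId) => ({
            subsidiaryId: subsidiaryId,
            rows: XLSX.utils.sheet_to_json(workbook.Sheets[subsidiaryId])
        }));
    };

    // The queue processor would then hand each row set to the import for its subsidiary.
});
```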

The last improvement as of now is the ability to create subsidiary-specific Map/Reduce deployments. This is possible through the Scheduled Script I added to the solution, which acts as a queue processor. Its Deployment Mapping parameter holds the deployments per subsidiary, plus the generic deployments that can be used for the remaining subsidiaries.
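
A minimal sketch of how a queue processor can route a queued import to the right deployment; the mapping shape, script ID and deployment IDs below are illustrative, not the ones shipped with the project:

```javascript
require(['N/task'], (task) => {
    // Illustrative mapping: subsidiary Internal ID -> Map/Reduce deployment ID.
    const DEPLOYMENT_MAPPING = {
        '3': 'customdeploy_csv_import_sub3',  // subsidiary-specific deployment
        'default': 'customdeploy_csv_import'  // generic deployment for the rest
    };

    const submitImport = (subsidiaryId) => task.create({
        taskType: task.TaskType.MAP_REDUCE,
        scriptId: 'customscript_csv_import_mr',
        deploymentId: DEPLOYMENT_MAPPING[subsidiaryId] || DEPLOYMENT_MAPPING.default
    }).submit(); // returns a task ID that can be used to check the import status

    // The Scheduled Script would call submitImport() for the next entry in the queue.
});
```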

In fact, I redesigned the Suitelet from scratch to handle the different file types and to initiate the Scheduled Script instead of the Map/Reduce script.

That is the current state of our advanced, or improved, CSV Import solution. Let's call it CSV Import 2.0. The source code and SDF (SuiteCloud Development Framework) project data are available on our GitHub page.

If you decide to give this project a go, you will need to change the CSV_PARENT_FOLDER_ID parameter in the Suitelet script code, as well as the values of the CACHE_DETAILS object and the EMAIL_AUTHOR_ID parameter in the Map/Reduce script code. More configuration needs to be updated on the script parameters as well; please review the example Script Deployment records to get a better understanding.
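
For orientation only, a rough sketch of those values as I read them from their names; the placeholder values and comments below are assumptions, so check the actual source before deploying:

```javascript
// Suitelet: File Cabinet folder that receives the uploaded files (assumed meaning).
const CSV_PARENT_FOLDER_ID = 123;   // replace with a folder ID from your account

// Map/Reduce: employee used as the author of notification emails (assumed meaning).
const EMAIL_AUTHOR_ID = 456;        // replace with an employee ID from your account

// Map/Reduce: account-specific cache settings; see the source for the exact shape.
const CACHE_DETAILS = { /* ... */ };
```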

Since bringing the project to its current state, I have not tested it intensively. Therefore, create as many use case scenarios as possible and test every aspect before implementing it. Whether you implement it or not, I will be glad to receive your feedback on the functionality or the source code.

I mentioned previously that there is still room for improvement. Below are the points we want to work on in the future:

  • Create proper documentation that explains the usage of the parameters
  • Add support for multi-select fields
  • Add support for the Addressbook subrecord
  • Add support for multiple line types for a single record
  • Add support for updating existing records
  • Build a mapping creator tool

 

We would like to hear your opinion: whether you liked this solution, or whether you discovered something missing, something not working properly, a better option, or some weird coding. You can reach us through the Contact Us page.