
Introduction


Custom widgets in SAP Analytics Cloud (SAC) enable developers to inject JavaScript code directly into an SAC story, adding interactive and customized functionalities. In this blog post, we will dive into the world of custom widgets in SAC, explore how React can supercharge your widget development, and introduce you to a useful example of how we've used React to build upon the Data Import API: the File Upload Widget.

File Upload Widget


The File Upload Widget builds upon the Data Import API to allow users to upload their FactData CSV or Excel datasets to public and private versions. The widget serves as an intermediary tool, parsing the file and sending the data contained within it to the Data Import API. The API then creates an ‘import job,’ which is used to post, validate, and eventually write the data to a specified public or private version. The File Upload Widget provides a user-friendly interface for customers to import their data and view the changes within the same story.

To develop the widget, we used the existing library of UI5 Web Components for React, which allowed us to align the style of our custom components with the existing styles throughout SAP Analytics Cloud.

Data Import API Overview


To understand the File Upload Widget, a basic understanding of the Data Import API is needed. The Data Import API empowers users to import large volumes of data into their models. The basic flow of the Data Import API is as follows:

  1. The user creates a ‘job’ based on a specific model and import type. Import types specify the type of data that we are importing with the current job, such as ‘Fact Data’ or ‘Master Data’. This job will then have the import type metadata of the model associated with it, such as dimensions and columns. The job is created by making a POST request to ‘/models/<MODEL_ID>/<IMPORT_TYPE>’, and the user will then be given a ‘Job ID’. This Job ID is a unique identifier for the job and will be used at later stages of the import process. At this point, the job will have a status of ‘READY_FOR_DATA.’


 

  2. The user then posts data to this job, by making a POST request to /jobs/<JOB_ID> and putting the data they want to import into the body of this POST request. This data can be in either JSON or CSV format; in the case of the File Upload Widget, it will be CSV, as this is how the user uploads the data. Once the data has been successfully posted, the job will move to a state of ‘READY_FOR_VALIDATION.’


 

  3. The data is then validated to ensure that it complies with any restrictions in place on the model, by making a request to the ‘/jobs/<JOB_ID>/validate’ endpoint. For example, if you have specified in your model that you only want data for the year 2024, but you upload a row with a date dimension value of 202301, that row will be rejected by this validation, as it falls outside the expected date range. Once the validation has completed, the job will move to the ‘READY_FOR_WRITE’ stage, where it can either be run, or have more data posted to it.


 

  4. The data is written to the model using ‘/jobs/<JOB_ID>/run’. This can be done when the job status is either ‘READY_FOR_VALIDATION’ or ‘READY_FOR_WRITE’. All rows which have passed validation will then be written to the model. Once the import job is finished, the status will be set to ‘COMPLETE’.
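A minimal sketch of this four-step flow, assuming a fetch-based client; the base path and the jobID response field are assumptions for illustration:

// Minimal sketch of the import job lifecycle. The base path and the
// 'jobID' response field are assumptions for illustration.
const BASE_URL = "/api/v1/dataimport";

async function runImport(modelId, importType, csvData) {
  // 1. Create the job -> status READY_FOR_DATA
  const job = await fetch(`${BASE_URL}/models/${modelId}/${importType}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({}), // mappings, default values, job settings go here
  }).then((res) => res.json());

  // 2. Post the CSV data -> status READY_FOR_VALIDATION
  await fetch(`${BASE_URL}/jobs/${job.jobID}`, {
    method: "POST",
    headers: { "Content-Type": "text/csv" },
    body: csvData,
  });

  // 3. Validate against the model's restrictions -> status READY_FOR_WRITE
  await fetch(`${BASE_URL}/jobs/${job.jobID}/validate`, { method: "POST" });

  // 4. Write all valid rows to the model -> status COMPLETE when finished
  await fetch(`${BASE_URL}/jobs/${job.jobID}/run`, { method: "POST" });
}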


Users can also apply the following parameters to the jobs:

  • Mappings: Users can specify mappings to map their file to the structure of the model within SAP Analytics Cloud. For example, if the model has a column called ‘StoreLocation,’ but the respective column in the uploaded file is named ‘SL’, we can map the SL column to the StoreLocation column in the model, telling the Data Import API ‘this is the value I want to use for StoreLocation’.

  • Default Values: In some cases, the user might want to specify a default value for a column. This could be because the source file does not contain a column required by SAC, or because the dataset is incomplete. Instead of adding this column to the source file, we can tell the Data Import API ‘the value for column X will always be Y, unless we specify otherwise’.

  • Job Settings: Some aspects of the jobs can be customized, such as:

    • Import Method: Whether we want to update or append to existing rows. For example, if there is a row in the model with a measure value of 1, and we update a row with the same keys but a measure value of 2, this will update the measure value in the model to 2. If we append instead, the measure values will be added together, giving the row a value of 3. Note that ‘Append’ is only available on public versions of Fact Data imports.

    • Execute With Failed Rows: Here, we can specify if we want to continue the import job if there are rows that have failed in the job. This will be true by default.

    • Ignore Additional Columns: In some cases, the source file might contain columns that are not needed in the SAC model. This setting allows us to ignore those columns and import the rest of the data. By default, it is set to false.

    • Pivot Settings: We might want to upload pivoted data to SAC, which can be done using the Data Import API. Here, we can specify our pivot settings, such as the pivot start, pivot key and pivot value.

    • Date Format Settings: In some cases, there might be a difference between the date format in our source file, and the date format in the SAC Model. This setting allows the Data Import API to handle this conversion, rather than manually editing the data for every row in the file.




All service client logic is handled by a class called DataImportServiceApi. This class uses a singleton pattern: the Story and Builder widgets each obtain the service client via getInstance(), and both consume the widget properties that were persisted to the story, such as the model ID, mappings, and default values.
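A minimal sketch of this pattern; only DataImportServiceApi and getInstance() are named above, the configure() helper is an assumption:

class DataImportServiceApi {
  static #instance = null;

  static getInstance() {
    // Lazily create the single shared instance on first access.
    if (!DataImportServiceApi.#instance) {
      DataImportServiceApi.#instance = new DataImportServiceApi();
    }
    return DataImportServiceApi.#instance;
  }

  // Hypothetical helper: both widgets feed in the persisted story settings.
  configure({ modelId, mappings, defaultValues, jobSettings }) {
    Object.assign(this, { modelId, mappings, defaultValues, jobSettings });
  }
}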

 

Custom Widget Components


Custom Widgets are built on two main components – the JSON definition file, and the resource files.

The JSON Definition file outlines the metadata for the widget. It communicates to the rest of the story what functionality, such as properties, events, and methods, are contained within this widget, as well as outlining some technical properties, such as web components.

The notable aspects to this file are:

  • Web Components: Web Components outline the different components that are used in the widget. In this case, there are two possibilities: the builder panel and the story widget. Each component will have the following properties:

    • Kind: The kind of component that it is, which will indicate where to render

    • Tag: The tag that will be applied to the Custom Element

    • URL: The URL of the resource file

    • Integrity: A SHA string that validates the integrity of the widget file. This is generated using our Python build script, which opens the bundle file as binary and hashes its contents.



  • Events: Events are how the File Upload Widget lets other parts of the story know that something has happened. In this case, two notable events are ‘onSuccess’ and ‘onFailure,’ which are triggered when an import job completes successfully or fails. We can then use these events to trigger other actions in the story, such as ‘click a button once the import is complete’.

  • Methods: We have defined three methods within the widget that can be called from other components within the SAC story. These are:

    • Open: Opens the widget dialog

    • getTotalJobRowCount: Gets the number of rows that the last job attempted to upload

    • getJobFailedRowCount: Gets the number of rows that failed to be imported in the last job.



  • Properties: These are properties that are specified by an admin user in the builder panel, and then used in the story widget to upload the data. These properties include:

    • ModelID: The modelID that we want to import the data to

    • ImportType: Type of data we want to import

    • Mappings: Any mappings we want to apply between the user's source data and their SAC model structure.

    • DefaultValues: Any default values that an admin user wants to apply if no data is specified in the body

    • JobSettings: Any Job Settings that are used on the import jobs




The resource file contains all the JavaScript code for the logic of the widget itself. In traditional widget development, the user creates and populates one resource file for each of the web components, then specifies the URL value in the JSON to point to wherever this resource file is hosted. In the case of the File Upload Widget, the bundle file generated by React is used as the resource file for all web components; the difference is that we pass in a prop to distinguish what code we want to render. This is explained in more detail in the Render Process section.

Setup and Installation


Integrating the custom widget into your story involves two steps: uploading the widget to SAC, and installing the widget into your story.

To upload the widget into SAC, go to SAC Homepage -> Side panel -> Analytical Applications -> Custom Widgets. Here, you will see an option to ‘Create’ a new custom widget. Click this, and you will be prompted to upload both the JSON Definition file, and the resource file. Click ‘OK’, and your widget will be uploaded to SAC.

 


Figure 1: Uploading Widget to SAC


Once the widget has been uploaded, navigate to your SAC story. Here, inside the Edit container of the story toolbar, click Insert, then navigate to Custom Widgets. Your newly uploaded widget will be available for selection.

 


Figure 2: Adding Widget to Story


Once added to the story, the widget is available for use.


Figure 3: Widget inside Story page 


 

Development Configuration


We can use a local React development server to streamline the development of the widget. Here, instead of uploading the resource file to SAC, we host it on our own machine. Then, we update the ‘FileUploadWidget.json’ to upload a new ‘LocalHost’ version of the widget. In this JSON file, the ‘url’ values of our web components point to this local server, which means that every time the web component is rendered, it will look to our local React server to get the bundle file.

 
  "webcomponents": [
{
"kind": "main",
"tag": "com-sap-file-upload-widget",
"url": "http://localhost:5173/dist/file-upload-widget.mjs", // local service
"integrity": "",
"ignoreIntegrity": true
},
{
"kind": "builder",
"tag": "com-sap-file-upload-widget-builder",
"url": "http://localhost:5173/dist/file-upload-widget.mjs", // local service
"integrity": "",
"ignoreIntegrity": true
}
]

Figure 4: Uploading Development Configuration


The benefit of serving and configuring the widget in this way is that it allows us to test and debug the widget using browser DevTools, so we can verify the functionality and behaviour of our UI elements. We can navigate to the URL where the scripts are hosted and set breakpoints in the code, and we also get the benefit of near-instant updates and hot reloading of the content as we make changes in our development environment.


Figure 5: Debugging on Localhost


An alternative way of debugging, which grants you the context of an SAC story, is to debug from the story itself. This is particularly useful when you want to integrate with services within SAC or deployed on Cloud Foundry, as you inherit the SAC authentication from the cookies, allowing you to call APIs from the widget and perform validation and testing.

 


Figure 6: Debugging inside an SAC Story 


 

Business and Admin Use Case


Here, we will outline the different flows that can be undertaken by either an admin user, or a business user.

Admin User

The Builder Panel of the widget captures all the configuration functionality within a side panel of the SAC story. The idea is to make the flow for the end user as seamless as possible. The admin user can select which model to import to, choose whether to import to a public or private version, and configure any data mappings, default values, and job settings. This information is saved to the story, allowing it to be used by the story component.

 


Figure 7: Builder Panel


 

 

Business User

Once configured, the File Upload Widget can be triggered by a business user in Story View mode (Note: Not applicable in 'Edit Mode'). The user selects an Excel (xlsx) or CSV file for upload, and the widget parses and validates the structure of the file. Once this small client-side validation is completed, the widget will send the data to the Data Import API, which will perform validation of the data itself, ensuring that any filters or formats specified in the model are respected. After completion, the user can check the job status, and download a CSV file containing rejected or failed validation records.

 


Figure 8: File Upload Dialog 


 

Render Process


This section outlines how the React code is rendered within SAC.

  1. The Custom Widget JSON defines two web components – the builder and the story widget. These web components specify the resource file, which in both cases, is the bundle.js file that has been built by React.

  2. This bundle.js file runs the code specified in the ‘index.js’ file. This code adds two custom elements to the window, with the tags specified in the JSON definition. Each custom element has a JavaScript component associated with it, which in this case are the ‘Story Widget’ and ‘Builder Widget.’

  3. Both components have certain properties associated with them, the most notable of which is the ‘mode’ property. This property specifies whether we are in the ‘builder’ panel, the ‘styler’ panel, or the story. These JavaScript files also contain a ‘render’ function, which renders the React app with the props associated in the file. In the builder panel, the app renders in ‘Builder’ mode, meaning it renders the builder components. In the end-user dialog, the app renders in ‘Story’ mode, meaning it renders the dialog to upload the data.
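A minimal sketch of steps 2 and 3, assuming React 18's createRoot API; the tag names match the JSON definition, while the component wiring is illustrative:

import React from "react";
import { createRoot } from "react-dom/client";
import App from "./App"; // hypothetical root component of the React app

// index.js: register one custom element per web component. The 'mode'
// prop decides whether the app renders the builder panel or the story dialog.
function defineWidgetElement(tag, mode) {
  customElements.define(
    tag,
    class extends HTMLElement {
      connectedCallback() {
        createRoot(this).render(React.createElement(App, { mode }));
      }
    }
  );
}

defineWidgetElement("com-sap-file-upload-widget", "STORY");
defineWidgetElement("com-sap-file-upload-widget-builder", "BUILDER");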


 

Persisting Data Between Components


 

Data is shared between components by persisting a ‘settings’ variable to the story. These settings are attached to the story, in much the same way that ‘localStorage’ attaches properties to a browser window. This will allow admin users to make changes to the widget settings in the builder panel, and have these settings be used when a business user is using the widget.

To update a value in the settings, we use the ‘onCustomWidgetAfterUpdate’ function. This takes an argument of ‘changedProperties,’ a JSON object containing key/value pairs for any properties that have changed since the last time a user saved their story. The keys are the names of the properties that changed, and the values are the new values. We update the value of each such property on the component, then update the settings object by calling ‘updateSettings().’ This function creates a new JavaScript object containing all the current properties of the JavaScript component, and persists a stringified version to the story.

Here is an example of this flow, where an admin user updates the import type from ‘factData’ to ‘privateFactData.’

  1. User opens the builder panel and opens the ‘Import Type’ section. Here, they will be shown a dropdown containing ‘Fact Data’ and ‘Private Fact Data.’ Currently, it is set to ‘Fact Data,’ so the user selects ‘Private Fact Data’ and saves the widget.

  2. Once the user saves, and a change has been detected, the ‘onCustomWidgetAfterUpdate’ function will run. This function takes in a ‘changedProperties’ argument, which will look like: { importType: "privateFactData" }.

  3. This function will look through all the possible values that can be updated by a user and check whether each one is defined in the changedProperties object. If it is defined, the property is updated. In this case, this.importType will be updated to ‘privateFactData.’

  4. The updateSettings function will be called, which creates a JSON object of the properties of the widget, stringifies it, then persists it to the story storage.
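A minimal sketch of steps 2–4; ‘onCustomWidgetAfterUpdate’ and ‘updateSettings’ are named above, while the class shape and property list are assumptions:

class BuilderWidget extends HTMLElement {
  // SAC calls this hook when properties change and the story is saved.
  onCustomWidgetAfterUpdate(changedProperties) {
    const keys = ["modelId", "importType", "mappings", "defaultValues", "jobSettings"];
    for (const key of keys) {
      if (changedProperties[key] !== undefined) {
        this[key] = changedProperties[key]; // e.g. importType -> "privateFactData"
      }
    }
    this.updateSettings();
  }

  updateSettings() {
    // Persist a stringified snapshot of the widget state to the story.
    this.settings = JSON.stringify({
      modelId: this.modelId,
      importType: this.importType,
      mappings: this.mappings,
      defaultValues: this.defaultValues,
      jobSettings: this.jobSettings,
    });
  }
}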


 


Figure 9: Render workflow



Implementation


The React App uses SAP UI5 React Components, allowing us to maintain visual clarity with the rest of the SAC application. Extensive documentation on these components, and more details on how to use them, can be found here: UI5 Web Components

Builder Panel


Model Selector


ModelSelector.jsx

The Model Selector is a ComboBox (text box and dropdown in one), which is populated by making a call to the /models endpoint of the Data Import API. This request returns a list of models available for import for the given SAC user, along with some metadata such as the model name and description.

The ComboBox uses live search to filter the list of models by name, description, and ID. Once a model is selected, the ComboBox fires an onSelectionChange event, which sets the model ID in the state. This triggers a React effect, which makes a call to /models/{modelID}/metadata; this returns a JSON object containing the column metadata for that model. The metadata contains the names and datatypes of the columns, which are used later for Mappings and Default Values.

When the widget is saved, the ID of the selected model is persisted to the ‘settings’ variable in the story, as modelId.
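A sketch of this selection-and-metadata flow as a React hook; the endpoint matches the overview above, while the hook itself and the base path are assumptions:

import { useEffect, useState } from "react";

const BASE_URL = "/api/v1/dataimport"; // hypothetical base path

// Loads column metadata whenever a model is selected.
function useModelMetadata() {
  const [modelId, setModelId] = useState("");
  const [metadata, setMetadata] = useState(null);

  useEffect(() => {
    if (!modelId) return;
    // Triggered by the ComboBox's onSelectionChange handler via setModelId;
    // the metadata is used later by Mappings and Default Values.
    fetch(`${BASE_URL}/models/${modelId}/metadata`)
      .then((res) => res.json())
      .then(setMetadata);
  }, [modelId]);

  return { modelId, setModelId, metadata };
}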


Figure 10: Model Selector


Once a Model has been selected, the Import Type Selector will be enabled.

 

Import Type Selector


ImportTypeSelector.jsx

Below the Model Selector is the Import Type Selector, allowing users to choose which import type they want to upload to, be it FactData, or PrivateFactData. Upon selecting an import type, the Mappings, Default Values, and Job Settings components become enabled.

 


Figure 11: Import Type Selector 


 

Data Mappings


MappingSelector.jsx

The Mappings section is made up only of a button which triggers a dialog. We handle mapping definitions in two ways:


Figure 12: Data Mappings  


 

 

  • Template File Upload


To provide mappings, a user may upload a CSV/Excel file which represents a sample of the real dataset, or even just a single row containing the columns. The first row of the file is parsed and mapped to a list of strings. In the case of an Excel file, we take the file headers from the first sheet by default, but also provide a dropdown to allow the user to select a different sheet if necessary.

 





Figure 13: Fuzzy Search Mapping Inference  


 

The MappingSelector will use a node module called Fuse.js to perform a fuzzy search on the column metadata for each column in the uploaded file. An instance of the Fuse search is created, with the file headers passed as the search list. Then, we specify a ‘fuzzySearch’ function, which takes a column name from the metadata as an argument and searches to see whether any of the columns in the file headers are a good match. It determines whether two columns match by applying a ‘threshold of confidence’, essentially scoring the match. If the score is below 0.34 (in this case, lower is better), then the dropdown box will be populated with the file header as it appears in the user's source. If no mapping is a good enough match, then the dropdown is left blank by default. Either way, users can then select what mappings they want to apply from the dropdown.
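A sketch of this inference, assuming the Fuse.js package; the 0.34 threshold comes from the text above, and the sample headers are invented:

import Fuse from "fuse.js";

// File headers from the uploaded template act as the search list.
const fileHeaders = ["ITEM NUMBERS", "ITEM PRICE", "SL"];
const fuse = new Fuse(fileHeaders, { includeScore: true });

// Returns the best-matching file header for a metadata column, or ""
// if no match scores below the confidence threshold (lower is better).
function fuzzySearch(metadataColumn) {
  const [best] = fuse.search(metadataColumn);
  return best && best.score < 0.34 ? best.item : "";
}

fuzzySearch("ItemNumber"); // -> "ITEM NUMBERS", pre-populating the dropdown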

 

  •   Raw Text Input


If the user prefers, they may set the mappings for the import job using raw text inputs. Each column from the metadata is paired with an input field, like so:

 


Figure 14: Raw Text Input Mapping  


 

Once the mappings are set and the dialog is closed, a mappings object is created, with the keys being the columns as they appear in the metadata, and the values being the file columns as they appear in the user's file. Once saved, the object is persisted to the “mappings” widget property and then, by extension, to the settings attribute stored on the story. This object is used later in the request body when creating the import job, and looks like this:
"Mapping": {
"ItemNumber": "ITEM NUMBERS",
"Price": "ITEM PRICE"
}

 

Default Values


DefaultValuesSelector.jsx

Default Values are values that will be used when no value is present in a row. They are useful when the dataset has repeating values for a column, when a column is not included in the user's file, or when some rows in the file do not have a value set. The values are set in the ‘DefaultValues’ dialog and then sent to the Data Import API on creation of the job. Like the Mappings component, we render a list of label and input pairs for each column found in the metadata object fetched from the Data Import API.

 


Figure 15: Default Values Dialog  


 

Once the fields are set, the properties are saved in the state. When the story is saved, the default values are persisted to the widget properties as defaultValues and later used in job creation, like so:
"DefaultValues" : {
"Version": "public.Actual",
"Date": "202303"
}

 

Job Settings


JobSettingsSelector.jsx

Job Settings are parameters we define on Job creation to affect the behaviour of an import. We expose these settings using the following controls:

 


Figure 16: Job Settings Panel  


 

More details about these settings can be found above in the Data Import API Overview.

 

Based on the selections from the client, we construct the Job Settings object. We define these properties in the POST body, along with the Mappings and Default Values, sent to the /models/{modelID}/{importType} endpoint.

 
{
  "pivotOptions": {
    "pivotKeyName": "ItemNumber",
    "pivotValueName": "Price",
    "pivotColumnStart": "3"
  },
  "dateFormats": {
    "Date": "YYYYWW"
  },
  "executeWithFailedRows": true,
  "importMethod": "Append"
}

 

Jobs Timeline


The Jobs Timeline offers a chronological history of previously created import jobs. We make a call to the /jobs endpoint, which gives us a list of all import job records. Then, we sort them based on each object's lastUpdatedTime property. Finally, we filter the list by the model ID that is currently selected, giving us a timeline of the previously executed import jobs for a specific model.

Within each panel of the timeline, we expose two buttons to the user:

  • Copy JSON Stats: Copies a JSON Object which offers useful stats about the Job

  • Copy Invalid Rows URL: Copies a string URL which points to the job's invalid rows endpoint; this endpoint returns a list of rows that failed validation during import (/jobs/{JobID}/invalidRows)


Both events use the Clipboard API, calling its writeText() function to place the content on the user's clipboard.
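For example, copying the job stats might look like the following, where jobStats is an assumed object assembled from the selected job record:

// writeText() returns a promise that resolves once the content is on
// the user's clipboard; jobStats is a hypothetical stats object.
navigator.clipboard
  .writeText(JSON.stringify(jobStats, null, 2))
  .then(() => console.log("Job stats copied to clipboard"));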

 

Updating all of these properties updates the state of the Builder Panel Widget, and once we save, these values are persisted to the story.

Story Widget


The Story Widget encapsulates the end user flow, for the business user who wishes to import data based on the settings configured in the builder panel. The story widget acts as a button which triggers an event to create a dialog that supplies the interface for file upload. The React app is rendered by both the Story Panel and the Builder Panel; as mentioned before, the difference is the ‘mode’ property, which is set to ‘STORY’ on the story panel and ‘BUILDER’ on the builder panel.

Once we add the widget to the story, the user will be presented with the ‘End User Upload’ component. This component serves two purposes: to open the ‘Import Dialog’, and to indicate the status of the import job, which will take the place of the text once the import has completed. In this text, the user will also be given a link allowing them to download a CSV file of all their failed rows.

 


Figure 17: End User Upload Component


 

Once the user clicks the button, the ‘EndUserEntryDialog’ will be loaded. This dialog allows the user to upload their data and indicates how many rows have been counted. If it is an XLSX file, a dropdown will be rendered, allowing the user to choose which sheet they want to import from. Using the two parsing libraries, we are able to find out how many rows are in the CSV data.

 


Figure 18: End User Upload Component with an XLSX file uploaded to it


 

It will also perform some simple client-side validation, to ensure that the data uploaded is consistent with what the Data Import API expects. If the data is not in the expected format, for instance if columns are missing or there are too many columns, we can inform the user with some simple error messages, as seen in the example below. Here, the header for the ‘Account’ column was replaced with ‘InvalidColumnName’, so we get two errors. These errors are React components, which render a ‘Negative’ MessageStrip with the message we want to display.

 


Figure 19: End User Upload Component after a successful import of 100,000 Rows


It is important to emphasise that the validation performed at this point only covers the structure of the data, not the data itself, as validation of the actual data is handled by the Data Import API. We just want to indicate to users that their data might not be in the format we expect.

Once the data is uploaded, we click run. This sends the data that the user has uploaded to the Data Import API, where a job will be created (with any mappings, default values or job settings specified by the end user). Then, the CSV data is posted to the job in chunks of 100,000 rows, and the data is validated and persisted to the model.
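A sketch of the chunked posting; the 100,000-row chunk size comes from the text above, while the base path and CSV assembly are assumptions:

const BASE_URL = "/api/v1/dataimport"; // hypothetical base path
const CHUNK_SIZE = 100000;

// Posts the parsed rows to the job in CSV chunks, repeating the header
// row at the top of each chunk.
async function postDataInChunks(jobId, headerRow, rows) {
  for (let i = 0; i < rows.length; i += CHUNK_SIZE) {
    const chunk = rows.slice(i, i + CHUNK_SIZE);
    await fetch(`${BASE_URL}/jobs/${jobId}`, {
      method: "POST",
      headers: { "Content-Type": "text/csv" },
      body: [headerRow, ...chunk].join("\n"),
    });
  }
}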

When this job completes, there will be an indication as to how many failed rows there are. These are rows which the Data Import API deems to be invalid, and therefore will not be persisted to the model. These rows can fail at two stages, during the posting of the row to the job, or during the validation.

Rows can fail during posting to the job if some data does not comply with the structure of the table. For instance, if an integer value exceeds the maximum possible value for an integer, or a string exceeds the maximum length for a column, the Data Import API will not post that row to the job, and will instead report the row as invalid in the response to the request. If there are failed rows in this response, we add them to a ‘FailedRows’ array in the React app.

Once all the data is posted, validation against the model occurs. Here, we validate that the data follows the model preferences, such as the date range, as well as ensuring that the user posting the data to the specific model has permission to do so. If rows fail this stage of validation, they can be retrieved by making a request to ‘/jobs/<JOB_ID>/invalidRows’. Here, a sample of up to 2000 failed rows can be stored, including the reason each row was rejected. These rows will then also be added to the ‘FailedRows’ array.

When the job completes, and the ‘FailedRows’ array contains failed rows from either stage, the End User Upload component will inform the user that the import completed with X failed rows, and they will be able to download a CSV file containing these failures. This file will be in the same format as the user's model, with an additional column containing the rejection reason.

 


Figure 20: End User Upload indicating that there are failed rows in a job that has completed.


 

If there are no failed rows, then the user will receive an indication that their data has been successfully imported:


Figure 21: End User Upload indicating the data has been uploaded successfully


 

File Parsing

There are separate implementations to handle the two file types that we support, ‘.csv’ and ‘.xlsx’; both flows are captured in FileHandler.js. We use two separate libraries to parse the data from the file into the format that is sent to the Data Import API.

For CSV data, we use an npm package called Papa Parse. This package takes in two arguments: the file and a callback function. The callback function is a way for us to specify what should be done with the data once parsing has completed, such as executing some small client-side validation.

For XLSX files, we use the ‘xlsx’ (SheetJS) library. This does a very similar thing, converting the Excel data into CSV format, which is usable by the Data Import API. Then, we perform the same validation to ensure that the data matches the structure we expect. A notable difference here is that XLSX files can contain multiple sheets. As such, a dropdown will be shown to users holding all the sheet names, and selecting a different sheet name will read the data from that sheet.

Both flows complete with a callback which adds the file data to the state. The contents of the file are then used in the POST body when calling /jobs/{JobID} to write data to the job. In the final step of the import, the widget calls /jobs/{JobID}/run using the ID of the job it created, and executes the import. The widget polls the status of the job using /jobs/{JobID}/status, and once the job has status COMPLETE, the number of records imported is exposed via a label on the Story Widget.
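A sketch of the polling step; the /status endpoint is named above, while the base path, interval, and response shape are assumptions:

const BASE_URL = "/api/v1/dataimport"; // hypothetical base path

// Polls the job status until the Data Import API reports COMPLETE.
async function waitForCompletion(jobId, intervalMs = 2000) {
  for (;;) {
    const { status } = await fetch(`${BASE_URL}/jobs/${jobId}/status`).then(
      (res) => res.json()
    );
    if (status === "COMPLETE") return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}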

 

Web Workers

When handling large volumes of data, the libraries that we use can be quite resource intensive. If not handled correctly, this can put a lot of load on the browser window, leading to poor performance or even crashes. To prevent this, we’ve adopted the concept of ‘Web Workers’ in the file handling functions.

The idea behind Web Workers is to offload heavy computation from the current browser window, which yields two benefits. Firstly, the speed at which the data can be parsed increases significantly, as the libraries are not competing for resources with the rest of SAC. Secondly, the current SAC window stays responsive, as the heavy parsing work is no longer running on its thread. Once the computation is done, the data is sent back to the current window, ready for consumption.

 

The way we’ve implemented this varies from library to library. In Papa Parse, we can pass a ‘worker’ flag in the parser options, which then handles the rest for us. For XLSX, we have to create our own instance of the Web Worker. To do this, we create a new Worker object and write the code we want to execute in a ‘blob’. This code can then take in the file, process it, and give us back the data we want to use.

Because this Web Worker is completely independent of the rest of the SAC window, and therefore also separate from the rest of the React application, we need a way to import the XLSX library into the worker. To do this, we’ve hosted the minified version of it on the Data Import API, from where it can be loaded into the Web Worker for consumption.
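A sketch of both approaches; the ‘worker’ flag is Papa Parse's documented option, while the blob-based worker and the hosted script URL are illustrative:

// Papa Parse: a single flag moves parsing off the main thread.
Papa.parse(file, {
  worker: true,
  complete: (results) => handleParsedRows(results.data), // hypothetical handler
});

// XLSX: build our own worker from a blob. The script URL stands in for the
// minified copy of the library hosted on the Data Import API.
const workerCode = `
  importScripts("https://<data-import-api-host>/xlsx.full.min.js"); // hypothetical URL
  self.onmessage = (e) => {
    const workbook = XLSX.read(e.data, { type: "array" });
    const sheet = workbook.Sheets[workbook.SheetNames[0]];
    self.postMessage(XLSX.utils.sheet_to_csv(sheet));
  };
`;
const blob = new Blob([workerCode], { type: "application/javascript" });
const worker = new Worker(URL.createObjectURL(blob));
worker.onmessage = (e) => handleParsedCsv(e.data); // hypothetical handler
file.arrayBuffer().then((buffer) => worker.postMessage(buffer, [buffer]));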


Figure 22: Parsing Excel files using SheetJS and a Web Worker  


 

Extending the File Upload Widget to support Master Data


Here, we are going to explain the process of extending the File Upload Widget to support Master Data. This change will involve changing the code in certain areas and expanding it to consume different aspects of the Data Import API. We will go through the process of setting up the development environment, making the code changes, and building the widget to produce a final version.

Setting up the Local Development Environment


When developing the File Upload Widget, we use a React development server. This lets us make changes to the widget and have them applied (almost) instantly, rather than having to upload the widget each time we make a change.

SAP Analytics Cloud supports integration with these development servers. When we upload a JSON file during the process of uploading a file upload widget, we specify where we want to look for the resource file. Instead of uploading this resource file to SAC, we define the URL of the development server. Here, once the SAC story is loaded, it will load the resource file from the development server.

  1. Download the ZIP file from this sample repo: File Upload Widget Repo

  2. Change into this directory, wherever you store it

  3. Run ‘npm install’ to install dependencies

  4. Run ‘npm run start’ to start the local development server.


 

Uploading Development Instance of Widget


Now, a local development instance of the File Upload Widget will be hosted on a local development server. Given that this widget is built on the Vite template, it will be hosted on localhost, port 5173.

To consume this local instance of the widget within SAC:

  1. Go to SAC -> Analytic Applications -> Custom Widgets -> Add New Custom Widget

  2. Upload the ‘fileUploadWidgetDev.json’ file

  3. Create a new story, and import this FileUploadWidget


Now, you will be able to see any changes that you’re making to the Custom Widget appear in the context of the SAC story.

 

Making Code Changes


From here, we want to extend the widget to be able to import MasterData. To do this, we need to execute the following steps:

  1. Open ‘ImportTypeSelector’ and remove the filter for Master Data. This means that when you open the Builder Panel, you will now be able to select ‘Master Data’ as an import type.

  2. Disable the Version Dropdown, as it is not applicable to Master Data. If an admin user does not specify a default value for a version, and no version column is specified in the end user upload file, then the end user will be asked which version they want to upload to. This is done by setting a ‘Default Value’ for the Version dimension. However, given that Master Data is not uploaded to a version, we need to remove this dropdown. This can be done by expanding the conditions applied when passing the ‘shouldDisplayVersionDropdown’ prop to the MainDialogContents. Now, it should look like:


shouldDisplayVersionDropdown={shouldDisplayVersionDropdown && defaultVersion === "" && props.importType !== "masterData"}

This means that the value can never be true if the import type selected is ‘masterData’.

  3. Disable Date Format Settings for Master Data. Date Format Settings are not applicable to Master Data, so we want to disable them. This can be done by expanding the conditional for the ‘disabled’ prop in ‘JobSettingsSelector.jsx’, so that it is set to true if the import type is privateFactData OR masterData.

  4. Testing – Once we’ve implemented the changes, we want to test that the flows work as expected. To do this, we conduct an import of master data and ensure that it behaves as we expect it to.


Building and Deploying New Version of the Widget



  1. Build and deploy the new widget
    Finally, we need to build the widget. To do this, we’ve provided a Python script called ‘build-script.py’. This script provides an easy-to-use method for building the widget into the format we expect to upload to SAC, and applies the necessary updates to the JSON files. Details can be found in the README of the repo.

  2. Reupload the widget. Once the code changes have been made, we want to reupload the widget to SAC. To do this, we repeat the steps we undertook to upload the widget in the first place.


 

Useful Links