Sage 300

Introduction

The Templates installation provides you with a ready-to-use set of Views, Dashboards and Reports.

Assuming the Central Point is freshly installed and empty, the installation consists of two steps:

  • Configuring the Data Sources for the ERP and Data Warehouse (optional) databases.
  • Importing the Templates into the Central Point.

Additional steps, such as building and loading the OLAP Cubes, may also be required.

Nectari DataSync

DataSync is required for Sage 300 only if you want to consolidate the data into one database so that Reports from different Companies can be viewed in the same database.

Important

Using DataSync is also recommended if you prefer to query a copy of the ERP database rather than the live ERP database itself.

Prerequisites

  • A version of Sage 300 supported by Sage
  • A destination database (preferably with the same collation as your Sage 300 database)
  • A valid version of DataSync (refer to Installing DataSync for more information)

Set the Connections

Important

Each database you want to consolidate needs its own Source Connection.

Important
  • No matter how many Sources you have, you only need one Destination connection.
  • The Time Zone must be set to GMT when selecting a Tracking Type by Date.
Example

You should have a result similar to this for the Source connection:

Example

You should have a result similar to this for the Destination connection:

Importing the Extractions

This feature allows you to import a pre-defined template or to restore a backup that you made yourself with the Export feature (refer to Export an Extraction for more details).

Some pre-defined templates are already available; if you don't have access to them, please contact your partner. For example, one pre-defined template defines the list of tables and fields to be synchronized when sending Sage 300, X3, Acumatica, or Salesforce data to the Cloud.

Important

One Extraction per Source connection is required to retrieve data properly.

  1. Click on one of the extractions in the list, then on the Import icon in the upper right-hand corner.
  2. In the Import Extraction window, click on the Choose a zip file hyperlink to browse to the location where you saved the exported .zip file, or drag the file directly into the window, then click on Next.
Note

For Sage 300, two zip files will be provided.

  • If you are doing a consolidation, import the DS_EXTR_[Software-Version]_[Extraction-Version]_Sage 300 with refresh DS-CONSO.zip file.
  • If you only have one Company and want to replicate the database, import the DS_EXTR_[Software-Version]_[Extraction-Version]_Sage 300 with refresh DS-SYNC.zip file to perform a synchronization.
  3. On the left pane, select the type of extraction you want to perform and click on Next.
Important

If you are doing a consolidation, make sure to have the same Column Name for the Unique Identifier field.

While there is no restriction for Sage 300, it is recommended to use CPYID for the Column Name and the Company codes for the Unique Identifier field.
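To illustrate the recommendation above, here is a minimal Python sketch (not DataSync internals; the company codes and row layout are invented for this example) of how rows from several company databases can be consolidated into one destination, with a shared CPYID column acting as the Unique Identifier:

```python
# Minimal consolidation sketch. The company codes (SAMLTD, SAMINC) and the
# row layout are invented for illustration; only the CPYID naming convention
# comes from the documentation above.

def consolidate(sources):
    """sources maps a Company code to that company's rows (list of dicts)."""
    destination = []
    for company_code, rows in sources.items():
        for row in rows:
            merged = dict(row)
            merged["CPYID"] = company_code  # same Column Name in every source
            destination.append(merged)
    return destination

company_a = [{"INVOICE": "IN-001", "AMOUNT": 100.0}]
company_b = [{"INVOICE": "IN-001", "AMOUNT": 250.0}]
result = consolidate({"SAMLTD": company_a, "SAMINC": company_b})
# Identical invoice numbers from different companies stay distinguishable
# because each consolidated row carries its company code in CPYID.
```

Without a consistent Unique Identifier column, rows with the same keys from different Companies would collide in the consolidated database.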

  4. Refer to Setup the Extraction Panel to define the extraction and click on Import to finish the process.
  5. You should have a result similar to this:

The diagram below illustrates how DataSync processes data.

Note

The Extractions window will automatically switch to the Tables window.

Refer to Add a SQL Query if you want to add SQL statements to some tables and Setup the Field Section to customize the fields (add calculation, change destination name etc.)


Validating and Building the Extractions

Once your extraction (source and destination connections and their related tables) is set up, the next step is to validate its settings before running it.

The feature will:

  • Ensure that all the tables/fields exist in the source connections,
  • Validate all SQL queries or calculated fields,
  • Ensure that data integrity in the destination connection is not affected (e.g., by a change to the table structure).
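The checks listed above can be sketched roughly as follows. This is a hedged illustration only: the dictionaries standing in for connection metadata, and the table and field names, are assumptions made for this example, not the product's API.

```python
# Hedged sketch of "Validate and Build"-style checks. The metadata structures
# here are invented stand-ins for the source and destination connections.

def validate_extraction(extraction, source_schema, destination_schema):
    """Return a list of human-readable validation errors; empty means valid."""
    errors = []
    for table, fields in extraction["tables"].items():
        if table not in source_schema:  # the table must exist in the source
            errors.append(f"Missing source table: {table}")
            continue
        for field in fields:            # every field must exist in the source
            if field not in source_schema[table]:
                errors.append(f"Missing field: {table}.{field}")
        # the destination structure must not change for an existing table
        if table in destination_schema and destination_schema[table] != fields:
            errors.append(f"Structure change in destination table: {table}")
    return errors

source = {"ARCUS": ["IDCUST", "NAMECUST"]}
extraction = {"tables": {"ARCUS": ["IDCUST", "NAMECUST", "BADFIELD"]}}
errors = validate_extraction(extraction, source, destination_schema={})
# errors reports the one field missing from the source connection
```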

To do so:

  1. Select the extraction you want to validate and build in the list and click on the Validate and Build icon.
  2. In the new window, choose the action which best fits your needs and click on Build (Validate for Migration and Export extraction types).
Note

The available choices differ according to the extraction type you select.

Example

For Synchronization and Consolidation extraction types:

For Migration and Export extraction types:

  3. Wait for the process to finish.
Note

A Validation Report window will appear to give you a quick overview of the process once it is finished. The results are displayed in the Status column; if there is an error, you can view more details by clicking on the hyperlink in the Error column, which leads to the Log Page.

Running the Extractions

Once your data has been validated (refer to Validate and Build an Extraction for more details), you can manually run the extraction if you want an immediate result instead of scheduling it.

  1. Select the extraction you want to run in the list and click on the Run Extraction Now icon.
  2. In the upper-right hand corner, choose the action you want to execute and the table(s) then click on Run.
Example


Note

Load (for the Migration extraction type only): Loads all data in your destination from your source.

Truncate and Load: Replaces all data in your destination with the current data from your source.

Incremental Load: Retrieves only records that have changed since your last Incremental Load and replaces their corresponding records in your destination with the updated ones.

Process Deleted Records: Specifies how many days back the validation process checks whether records have been deleted, based on the last changed date. For example, if the value is set to 30 days, the system checks all transactions created or updated in the last 30 days and validates whether they still exist in the source. Records that no longer exist in the source are deleted from the destination.
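The Incremental Load and Process Deleted Records behaviours described above can be sketched as follows. The record shapes, keys, and dates are invented for illustration; this is not DataSync's actual implementation.

```python
# Illustrative sketch of the two actions described in the note above.
from datetime import date, timedelta

def incremental_load(source, destination, last_run):
    """Copy only the records changed since the last Incremental Load."""
    for key, record in source.items():
        if record["changed"] > last_run:
            destination[key] = dict(record)  # replace the matching record

def process_deleted_records(source, destination, days, today):
    """Within the last `days`, drop destination records that no longer
    exist in the source -- the behaviour described in the note above."""
    cutoff = today - timedelta(days=days)
    for key in list(destination):
        if destination[key]["changed"] >= cutoff and key not in source:
            del destination[key]

today = date(2024, 6, 30)
source = {"A": {"changed": date(2024, 6, 29)}}          # still in the source
destination = {"A": {"changed": date(2024, 6, 1)},
               "B": {"changed": date(2024, 6, 15)}}      # deleted upstream
incremental_load(source, destination, last_run=date(2024, 6, 20))
process_deleted_records(source, destination, days=30, today=today)
# destination now contains only "A", refreshed to its latest change date
```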

  3. Wait for the process to finish.
Note

When the process is finished, the results are displayed in the Status column. If there is an error, you can view more details on it by clicking on the hyperlink in the Error column, which leads to the Log Page.

Data Source Configuration

Environments and Data Sources

Tip

The description given to a Data Source created for the first time is used throughout the environments to describe this specific Data Source.

Give a generic description for the first time (e.g. ERP Data Source, Cube Data Source) and if necessary, rename it after the first environment has been created.

The following information is needed to configure the Data Sources:

  • Database server credentials: Server name, Instance, Authentication strategy.
  • Main ERP database information: Database and schema name.

ERP Data Source

  1. In the upper right-hand corner, click on the icon to access the Administration section.
  2. On the left pane, select Env. & Data Sources.
  3. By default, there is already an environment called Production, which you can rename by double-clicking in the name box. Once changed, press the Enter key.
  4. In the Data Sources section, click on Add to create the first Data Source.
  5. Complete the ERP Data Source configuration. See instructions for MS SQL Server and Oracle below.

For MS SQL Server

Datasource description:
Sage 300 Data Source
Type:
SQLSERVER
Server:
Database server of Sage 300
Database name:
Name of the Sage 300 database (beware of case sensitivity)
Database schema name:
Create the following two entries by clicking on the + icon (replace DatabaseName with the appropriate value):
DatabaseName.dbo
DatabaseName.NEC_CUSTOM_SCHEMA
Note

This second line contains the Nectari Custom Schema.

You can use a different one, but we highly recommend following this naming convention:

  • Start with NEC
  • Use all capitals
  • Separate words by an underscore
Important

Choose a unique Custom Schema name for each Environment.

Nectari schema:
Enter the chosen Nectari custom schema for the current environment
Authentication strategy:
UseSpecific
User Name:
SQL User accessing the Sage 300 database. For example, sa.
Password:
The user's password.
  6. Click on Validate then on Save to complete the configuration of the Data Source.
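For reference, the configuration fields above map naturally onto an ODBC-style connection string. The sketch below is an assumption made for illustration (the driver name, key names, and sample values are examples), not the exact string the product builds internally.

```python
# Illustrative only: how the Server / Database name / User Name / Password
# fields above could combine into an ODBC-style connection string. The driver
# name and sample values are assumptions, not product internals.

def build_connection_string(server, database, user, password):
    parts = {
        "DRIVER": "{ODBC Driver 17 for SQL Server}",  # assumed driver
        "SERVER": server,
        "DATABASE": database,  # mind the case sensitivity of the database name
        "UID": user,
        "PWD": password,
    }
    return ";".join(f"{key}={value}" for key, value in parts.items())

conn_str = build_connection_string("SAGESRV01", "SAMLTD", "sa", "secret")
```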

For Oracle Database

Datasource description:
Sage 300 Data Source
Type:
ORACLE
Server:
Name of the Oracle server
SID and Port:
SID and Port of the Sage 300 database instance
Database schema name:
Create 2 entries by clicking on the + icon:
dbo
NEC_CUSTOM_SCHEMA
Note

This second line contains the Nectari Custom Schema.

You can use a different one, but we highly recommend following this naming convention:

  • Start with NEC
  • Use all capitals
  • Separate words by an underscore
Important

Choose a unique Custom Schema name for each Environment.

Nectari schema:
Enter the chosen Nectari custom schema for the current environment.
Authentication strategy:
UseSpecific
User Name:
For example: system.
Password:
Password of the user.
  • Click on Validate then on Save to complete the configuration of the Data Source.

Cube Data Source

Important

This section applies only if you install the UDM template afterwards.

In the same Environment as the ERP Data Source, create a new Data Source for the OLAP Cube.

Complete the Data Source Definition with all the appropriate information.

For MS SQL Server

The screenshot below provides an example of this.

Server:
Database server where the Nectari OLAP For SQL Server package is installed.
Database name:
NectariCube.
Database schema name:
NectariCube.NEC_FOLDER (replace FOLDER with the folder name), where NEC_FOLDER is the schema used in the ERP Database of the same environment.
Nectari schema:
Enter the chosen custom schema for the current environment
  • Click on Validate then on Save.
  • Click on Set as Data Warehouse to define the Data Source as a Data Warehouse then enter the following information:
Database warehouse schema:
Enter the chosen Nectari custom schema again.
Use MARS during the cube loading:
Unchecked
  • Click on Validate then on Save.
Tip

Refer to Environments and Data Sources for more details about the MARS option.

For Oracle Database

Server:
Name of the database server where the Central Point is installed
SID and Port:
SID and Port of the Oracle database instance
Pooling:
Activating this option will improve performance (Refer to Environments and Data Sources for more details).
Database schema name:
NEC_CUSTOM_SCHEMA
Where NEC_CUSTOM_SCHEMA is the custom schema used in the ERP database of the same environment
Nectari schema:
Enter the chosen Nectari custom schema for the current environment.
Database warehouse schema:
After saving the Data Source and setting it as a Data Warehouse, enter the chosen Nectari custom schema for the current environment.
  • Click on Validate then on Save to complete the configuration of the Data Source.

Importing Templates

For each environment, the following previously configured information will be required:

  • ERP Database Name
  • Nectari Custom Schema
  • ERP Schema

Download the Template file: TPL_9.5.XXX_Sage300.zip.

The XXX represents the build number of the template (use the highest available).
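If several template builds have been downloaded, a small helper like the one below (a sketch written for this guide, not a shipped tool) can pick the archive with the highest build number from the file names:

```python
import re

# Sketch helper, not part of the product: select the archive with the highest
# build number following the TPL_9.5.XXX_Sage300.zip naming convention.

def latest_template(filenames):
    pattern = re.compile(r"TPL_9\.5\.(\d+)_Sage300\.zip$")
    builds = [(int(m.group(1)), name)
              for name in filenames
              if (m := pattern.search(name))]
    return max(builds)[1] if builds else None

print(latest_template(["TPL_9.5.087_Sage300.zip", "TPL_9.5.101_Sage300.zip"]))
# → TPL_9.5.101_Sage300.zip
```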

Running the Import Template

  1. In the upper right-hand corner, click on the icon to access the Administration section.
  2. In the Administration section, click on the Templates drop-down menu in the left pane.
  3. Select Import Template.
  4. Choose the specific location where the new templates will be installed and click on Next.
    Note

    Usually, the Root folder is used.

  5. In the Import Template window, click on Select files....
  6. Find the folder where you saved the Template .zip file, select it, then click on Open.
  7. In the Data Sources Mapping screen, associate the Data Sources (ERP) listed in the Received Data Sources Description column (those from the template) with the Data Sources you previously defined in the Central Point (listed in the Current Data Sources Description column).
    • In the Received Data Sources Description column, ensure that only the checkboxes of the Data Sources you want to use from the template are ticked.
    • In the Current Data Sources Description column, click on Bind a Data Source to access the drop-down list containing the existing Data Sources and click on Next.

The next screen displays all of the Template's content, compared against what the Central Point already contains.

By default, on a first install, everything will be set to Add (leave everything at its default).

  • In the case of a first installation, the first four columns will display None and Never Installed, the next three will detail the Template content, and the last three give you the choice to Add, Update or Skip during the installation.
    Note

    In the case of an update, you can refer to Updating template for more details.

  8. Click on Next (this can take time).
  9. Once this has been completed, a window will prompt you to input the necessary parameters to create the custom objects.
  10. If more than one Environment has been created, you will see a column per Environment. You can untick an Environment checkbox, in which case the Global Scripts will not run in it.


  11. Complete the parameters (see examples below) and click on Next.
  12. After importing, an Execution Report will be produced, as shown below.
    Note

    The first section is for the ERP Data Source and the one below it is for the Cube Data Source.

    You can click on the button to see the details of each script individually. If no failures are reported, close the window.

  13. If any of the scripts failed to run, a fail icon will be displayed. Click on the fail symbol to view the Report Preview window, which shows the respective SQL script.
  • Copy this script, debug, and run it separately if needed. Users who are proficient with SQL can debug it straight in the Report Preview window and run it by clicking on the Try to rerun button.

Updating template

Important

Some considerations you must take into account before starting:

  • Making fresh backups of both the Nectari database and Central Point before doing a template update is highly recommended.
  • Check the Nectari Data Models and Nectari custom SQL objects that may have been delivered with the initial template version, as you might lose these customizations upon updating.
  • You must have a template version that matches the installed software version. If you are using Nectari 9, the template version should also be 9.

Upgrading the Nectari software only updates the software, not the template. In other words, the existing Nectari Data Models and Views won't be affected.

Note

After a software upgrade, it is not mandatory to systematically perform a template update. A template update is useful if you have encountered problems with specific Nectari Data Models or Nectari custom SQL objects as it includes fixes.

To update a template:

  1. After mapping the Data Sources, tick the checkboxes of the objects you want to upgrade and click on Next.
    Note

    By default, no checkbox in the Update column will be ticked. If there is a new Data Model / View, the Add checkbox will be ticked. Select Skip if you want to ignore it.

    Important

    If you tick the Update checkbox, it will overwrite the existing Nectari objects associated with that Data Model or connected to the others (dependencies). Please note that if any customizations have been done, they will be lost.

  2. Select the environment in which the scripts will be executed and click on Next.
  3. Complete the parameters and click on Next.
  4. In the Execution Report window, if any of the scripts failed to run, a fail icon will be displayed. Click on the fail symbol to view the Report Preview window, which shows the respective SQL script.
  5. Copy this script, debug it, and run it separately if needed. Users who are proficient with SQL can debug it directly in the Report Preview window and run it by clicking on the Try to rerun button.
Important

Web Browsers have updated their policy regarding Cookies and these changes must be applied to your Web Client if you want Nectari embedded into your ERP website, or use Single Sign-On (SSO). Refer to Cookie Management for more details.

Setting up Optional Fields

Optional Fields is a feature of Sage 300 that has been imported into Nectari.

For Sage 300, there are seven types of configurable Optional Fields:

Global Variable   Sage 300 Table       Number of Variables
@@GLOPT           Account              5
@@ICITEMOPT       Item                 9
@@ORDOPT          Order                5
@@CUSOPT          Customer             9
@@GLPOSTO         Posted Transaction   5
@@VENDOPT         Vendor               10

To access the various Optional Fields available in Sage 300, you must first configure them using the Global Variables available in the template.

Editing a Global Variable

To edit a Global Variable:

  1. In the Administration section, click on Global Variables in the left pane.
  2. In the list, select one of the Global Variables and click on the pencil located on the right.
  3. Change the value in the Script field and click on Update.

Concrete example

If we take the Detailed Transaction report as an example, we can see that the Dimensions representing the GL Opt Field 1 Code and GL Opt Field 2 Code optional fields return a null value.

As specified in the table above, these Fields correspond to the Account table and are defined in the @@GLOPT global variable.

In this example, we've modified the @@GLOPT1 and @@GLOPT2 global variables by following the procedure described in Editing a Global Variable.

Note

These Optional Fields can be found in the GLAMFO table.

We will use ACCTCLASS and CASHFLOWTYPE as the Script values.

If we refresh the Report now, we can see that the data for these Optional Fields are now available.