Data import

Measurements can be imported from various sources and data formats.

Note

Data points that are created during a data import will be charged to your Energy Management license.

Name

Give the data import a meaningful name.

Format

Select the data format you want to import. Each data format has its own set of options to further configure the expected formatting of the data.

Custom formats
CSV

Comma-separated values from a plain text file.

JSON

JSON data from a remote server.

SQL

Records from a remote SQL database. Only available on Alcedo Inhouse.

Standardized Formats
MSCONS

Metered services consumption report message.

SDAT-CH

Standardized data exchange for the Swiss electricity market, based on ebIX.

External APIs

This section provides connections to external APIs from energy service providers.

  • Hamburger Energiewerke

  • IWB (Industrielle Werke Basel)

  • Smart Energy Link

  • Solar21

Source

Select from where measurements should be imported. The following options are available:

Manual File Upload

This option is available for CSV, MSCONS, SDAT-CH and IWB and is for testing purposes only. Click on Import Now to start the data import manually and upload the file in your web browser.

Windows Share (SMB)

Available for CSV, MSCONS, SDAT-CH and IWB. The file will be downloaded from a shared Windows directory.

FTP / FTPS / SFTP

Available for CSV, MSCONS, SDAT-CH and IWB. The file will be downloaded from an FTP server.

Warning

FTPS servers that enforce SSL session reuse are not supported.

SQL Database

Only available for SQL data. The data will be queried from a remote SQL database.

Incoming POST Request

Only available for JSON data. The JSON data to be imported will be posted from a third-party application to our public API.

The data must be posted as a list of JSON objects to the URL that is displayed in the URL field. Note that the URL is only displayed once you save your data import.

Example JSON

[
  {"systemName": "DP_1", "timestamp": "2024-01-20'T'14:00:00X", "value": 1.0},
  {"systemName": "DP_1", "timestamp": "2024-01-20'T'14:00:15X", "value": 2.0},
  {"systemName": "DP_1", "timestamp": "2024-01-20'T'14:00:30X", "value": 3.0}
]
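
As an illustration only, a payload like the one above could be posted with a few lines of Python. The URL, the access token and the use of a Bearer Authorization header are placeholders and assumptions of this sketch; refer to the public API documentation (see URL below) for the actual authentication flow.

import requests  # third-party HTTP client, used here for brevity

# Placeholders: replace with the URL shown in the URL field and a valid credential.
IMPORT_URL = "https://example.org/api/v1/dataimport"
ACCESS_TOKEN = "<access-token-for-the-public-api>"

# The payload is a list of JSON objects, as in the example above.
payload = [
    {"systemName": "DP_1", "timestamp": "2024-01-20T14:00:00Z", "value": 1.0},
    {"systemName": "DP_1", "timestamp": "2024-01-20T14:00:15Z", "value": 2.0},
]

# Post the list; requests serializes it as JSON via the json= parameter.
response = requests.post(
    IMPORT_URL,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
response.raise_for_status()  # raise an exception for non-2xx responses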

You can map the individual properties from the imported JSON to specific properties of data points in the section JSON Configuration.

Email

Available for CSV, MSCONS, SDAT-CH and IWB. Select this option if your data files are sent as an attachment via email. The system will import files from any email message sent to the email address shown below (see Email Tag).

Note that only one attachment per email is supported. If the email contains multiple files, only the first one will be imported.

When an import is triggered via schedule, only new unread emails will be imported.

If you want to import already read emails again, or you want to run a manual import, click on Import Now. In the following dialog, you can select the oldest email that you want to import. The selected email and all emails that are newer than the selected one will be imported. A manual import will always import both read and unread emails.

Warning

Read emails older than 30 days and unread emails older than 90 days are automatically deleted.

URL

Only available with source FTP/FTPS/SFTP, Windows Share (SMB), SQL Database, or Incoming POST Request.

If you selected FTP/FTPS/SFTP, Windows Share (SMB) or SQL Database as your source, enter the complete URL where the data to be imported can be accessed, including protocol, port and full path.

Example POST: https://example.org/api/v1/dataimport

Example FTP: ftp://example.org:2121/home/dataimport

Example SQL: jdbc:mysql://example.org:3306

Example SMB: smb://example.org/home/dataimport

If you selected Incoming POST Request as your source, this field shows you the URL to which the third-party application has to post the data to be imported. The remaining information about our public API can be found here. The client ID and secret that are required to log in to our public API can be found on the Settings page (click on the user menu at the top right, then on Settings).

User name

Only available with source FTP/FTPS/SFTP, Windows Share (SMB) or SQL Database, or with format Smart Energy Link. The user name with which Avelon can authenticate itself.

Password

Only available with source FTP/FTPS/SFTP, Windows Share (SMB) or SQL Database, or with format Smart Energy Link. The password with which Avelon can authenticate itself.

API Key

Only available with format Solar21 or Smart Energy Link. Contains the key for accessing the API; it must be obtained from the API provider.

Query

Only available with source SQL Database. The SQL query with which the data is fetched from the database.

Example: SELECT Timestamp, SystemName, Value FROM TableName;
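
For illustration, the result set of the example query above should contain one row per measurement, for instance (hypothetical values):

Timestamp            SystemName  Value
2024-01-20 14:00:00  DP_1        1.5
2024-01-20 14:15:00  DP_1        1.6

How the returned columns are interpreted is then defined in the section Column Configuration (see below).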

Implicit FTPS

This option is displayed when the Source is set to FTPS. With implicit FTPS, the client connects to a dedicated port (port 990) on which the connection is encrypted from the start, instead of negotiating encryption on the standard port 21. Because no unencrypted connection is ever established, implicit FTPS is considered the stricter way of setting up a secure connection.

Ignore Certificate Errors

This option is only available when the Source is set to FTPS and can be used to temporarily ignore certificate errors, for example if the certificate has expired. However, it is recommended to renew expired certificates rather than ignoring the resulting issues.

Email Tag

Only available with source Email. Emails have to be sent to the email address dataimport+<tag>@avelon.cloud to be processed, where <tag> should be replaced with the value that was provided in the field Email Tag.
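
For example, if the Email Tag is building7 (a hypothetical value), the data files would have to be sent as attachments to dataimport+building7@avelon.cloud.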

Test Connection

Only available with source FTP/FTPS/SFTP, Windows Share (SMB) or SQL Database. Attempts to log on to the server under the specified URL with the credentials provided.

Format (CSV)

These settings are only available for source FTP/FTPS/SFTP, Windows Share (SMB) or SQL Database.

Encoding

The file encoding of the CSV file to be imported. Not available with source SQL Database.

Row Start

Starts importing data from this row. With this setting you can skip empty rows or comments in the file. Not available with source SQL Database.

Delimiter

The delimiter character by which individual values are separated from each other. Not available with source SQL Database.

Text Qualifier

The character with which strings are enclosed. Not available with source SQL Database.

Decimal Separator

The decimal separator. Not available with source SQL Database.

First row contains column headers

Ignores the first row of the file. Not available with source SQL Database.

Columns

Select how data is structured in the CSV file. After selecting the appropriate option, configure the columns in the section Column Configuration (see below). Note that your current column configuration will be reset when you change this option, because the two options use incompatible column types.

  • Date, System Name, Value: Each measurement to be imported is located on a separate line. The system name of the corresponding data point is included on the same line in a separate column. Despite the text of this option, the imported file can have more than three columns (first sample below).

  • Date, Data Point 1, Data Point 2…: The data points are arranged as separate columns. All measurements in a column are associated with the same data point. If several data points have measurements at the same time, these values appear on the same line (second sample below).
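
For illustration, two minimal sample files, assuming a semicolon as delimiter, a dot as decimal separator and the date format dd.MM.yyyy HH:mm (all of these are assumptions; adjust them to your own configuration).

First sample (Date, System Name, Value):

20.01.2024 14:00;DP_1;1.5
20.01.2024 14:00;DP_2;2.0
20.01.2024 14:15;DP_1;1.6

Second sample (Date, Data Point 1, Data Point 2…):

20.01.2024 14:00;1.5;2.0
20.01.2024 14:15;1.6;2.1

In the first sample you would configure three columns under Column Configuration (Date, System Name, Value); in the second, a Date column plus one Data Point column per data point.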

Value Type

Select whether the values to be imported should be interpreted as pure measurements or as consumption values.

Sites (Solar21)

This card is only available for Solar21 imports. You can specify multiple sites for which data should be imported. Each site consists of the following fields:

Site ID

The ID of the site from which you want to download the data.

Site Name

The name of the site for easier identification.

Test Connection

Click this button to initiate a test connection to the site.

Module Selection (Hamburger Energiewerke)

This card is only available for Hamburger Energiewerke imports. You can select multiple modules for which data should be imported.

To add a module, click on Add Module in the table header. In the dialog, open the station that contains the desired module. Then select the checkbox next to the desired module. Select as many modules as you want to include in the data import, then click on Add to add them.

Note that there’s a limit on the number of requests that can be made to the server per day. If this limit is exceeded, the API will no longer respond to any requests, and the station and module names might not be displayed correctly in the user interface. In this case, wait and try again the next day.

Time zone

Set the time zone for imported timestamps that don’t contain explicit information about their time zone.

Schedule

Start Date

The date on which the import should be executed for the first time.

Start Time

The time at which the import should be executed for the first time.

Recurrence

Specifies the frequency at which the import should be carried out automatically, starting from the start time defined above.

Next Execution

Shows when the automatic data import will be executed the next time.

Import Now

Allows you to run the import ahead of time. Please note that this button is for testing purposes only. Existing measurements may be overwritten by the import.

Note

The system will remember the last time an import was executed. On each subsequent import, only files that have been modified since the last import will be imported.

Target Device

Device

The target device for the data import. Data is only imported for those data points that are found on this device based on their system name.

Warning

Imports with a schedule and imports of a proprietary file format can only be assigned to virtual devices or Beetles, and no more than three data imports are allowed on the same device with the same data format and source type.

Virtual devices can only be obtained as part of a project or by contacting Avelon sales.

Automatically create data point

If a data point is not found on the target device based on its system name, a new data point is automatically created.

Both the system name and the external ID of new data points will be set to the system name provided in the import file. In subsequent imports, the data point whose external ID or system name matches the system name provided in the import file will be updated. The external ID takes precedence to make sure that data points are updated correctly, even if their system names have changed.

Column Configuration

These settings are only available for CSV or SQL imports. You can specify which columns are to be included in the import and how they should be interpreted. Create an entry in this list for each column of your input and select the appropriate interpretation under Type.

Depending on which option you selected under Columns in the Format section above, different options are available.

Option “Date, System Name, Value”

(Ignore)

Ignore this column during the import.

Date

Contains date information. This field is required. Use the text field next to it to specify the format, e.g. dd.MM.yyyy HH:mm:ss.

Also see section Troubleshooting.

System Name

This field is required and contains the system name (object name) that identifies the data point unambiguously on the selected target device.

By default, the external ID of data points is matched against the system name provided in the import file to identify a data point. If no matching data point was found, the system name of data points is matched against the system name from the import file. This is to make sure that data points are matched correctly, even if their system name was changed after the initial import. If no data point was found and the option Automatically create data point is checked, a new data point will be created during the import.

Warning

In the case where there’s a data point with a matching external ID and another data point with a matching system name, the import cannot clearly assign the imported value to either data point, and the import will fail with an error.

Label

Contains the data point label. This field is only required if the option Automatically create data point above is checked.

Value

This field is required and contains the measurement value to be imported.

Unit

Contains the unit of the data point. If the option Automatically create data point is checked, this unit will be used if a data point is created during the import. If provided, the unit must be one of Avelon’s supported units, e.g. kWh. If no unit is provided, new data points will have no unit.

Warning

The unit is not used to convert measurements during the import. The values will be imported verbatim, regardless of this unit or the unit of the data point.

State

Defines the state of the data point value. Supported values are W, E, V, G or F (or their numeric priorities 1-5, according to Metering Code Schweiz). If no value is provided in the import file, the state will automatically be set to “True value”.

Option “Date, Data Point 1, Data Point 2…”

(Ignore)

Ignore this column during the import.

Date

Contains date information. This field is required. Use the text field next to it to specify the format, e.g. dd.MM.yyyy HH:mm:ss.

Also see section Troubleshooting.

Data Point

Drag a data point from the sidebar on the right onto the highlighted area next to the dropdown to specify which data point the measurements in this column should be assigned to. For each data point column in the import file, a column with that data point assigned needs to be configured here.

Preview

If Source is set to Manual File Upload, Windows Share (SMB) or FTP / FTPS / SFTP, you can click on Preview at the bottom of the card to show a preview. The system will attempt to connect to the selected source and parse the latest file based on the column configuration provided above. For Manual File Upload, you will be asked to select a file from your computer. If the file can be parsed successfully, a preview of the resulting import will be shown below.

If Source is set to Email, you can click on Preview at the bottom of the card to show a preview. You will be asked to select one of the emails that have previously been sent to the provided email address (see Email Tag above). Select an email and click on OK. Any file attached to that email will be parsed according to the given column configuration, and the result will be shown below. You can select another email or modify the column configuration to update the preview.

JSON Configuration

These settings are only available for sources that support JSON, i.e. Incoming POST Request.

Here you can specify which properties are to be included in the import and how they should be interpreted. Create an entry in this list for each property of your input, enter the property name in the column Property, and select the appropriate interpretation under Type (an example follows the type descriptions below):

Date and Time

Contains date information. This field is required. Use the text field next to it to specify the format, e.g. dd.MM.yyyy HH:mm:ss.

Also see section Troubleshooting.

System Name

This field is required and contains the system name (object name) that identifies the data point unambiguously on the selected target device.

By default, the external ID of data points is matched against the system name provided in the import file to identify a data point. If no matching data point was found, the system name of data points is matched against the system name from the import file. This is to make sure that data points are matched correctly, even if their system name was changed after the initial import. If no data point was found and the option Automatically create data point is checked, a new data point will be created during the import.

Warning

In the case where there’s a data point with a matching external ID and another data point with a matching system name, the import cannot clearly assign the imported value to either data point, and the import will fail with an error.

Label

Contains the data point label. This field is only required if the option Automatically create data point above is checked.

Unit

Contains the unit of the data point. If the option Automatically create data point is checked, this unit will be used if a data point is created during the import. If provided, the unit must be one of Avelon’s supported units, e.g. kWh. If no unit is provided, new data points will have no unit.

Warning

The unit is not used to convert measurements during the import. The values will be imported verbatim, regardless of this unit or the unit of the data point.

Value

This field is required and contains the measurement value to be imported.

Status

Contains the status of the measurement value to be imported as a string, according to Metering Code Schweiz.
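
As an example only, suppose the third-party application posts measurements with the hypothetical property names ts, meter and reading:

[
  {"ts": "2024-01-20T14:00:00Z", "meter": "DP_1", "reading": 1.0},
  {"ts": "2024-01-20T14:00:15Z", "meter": "DP_1", "reading": 2.0}
]

You would then create three entries in this list: ts with type Date and Time and the format yyyy-MM-dd'T'HH:mm:ssX, meter with type System Name, and reading with type Value.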

Validate payload

This feature is only available for imports with format JSON and source Incoming POST Request.

If data is imported via API, it can be hard to pinpoint errors that arise from invalid import data.

To make troubleshooting easier, we store the last received payload and provide an option to validate it on the Validate Payload card.

If data was received via API, the last received payload is displayed in the text field Last Received Payload. Click on the Validate Payload button on the right to validate it. If there are any errors, they will be displayed below the button, and you can edit the payload and validate your changes again. The changes will not be saved, but this gives you the option to test modifications of the JSON payload quickly.

Error handling

By default, an import will be marked as faulty if any errors occur during the import. In that case, you need to fix either your CSV files or the settings of your import job. On the next import, the system will try to re-import the file.

If your CSV file contains invalid lines that do not adhere to the CSV syntax and you want these lines to be ignored, you can select the Skip Parsing Errors option. This will skip any errors during the import and mark the import as successful in any case. However, we recommend that you strip any invalid content from your CSV files before attempting to import them, so that you do not need to skip parsing errors in the first place.

Examples of invalid lines (this list is not exhaustive; also see the sample after the list):

  • Lines that contain comments.

  • Lines that do not contain the expected number of fields or columns.

  • Lines that contain invalid content, such as invalid numbers or dates.
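
For illustration, a hypothetical file with the columns Date, System Name and Value (semicolon-delimited, date format dd.MM.yyyy HH:mm), in which only the first line would be imported and the remaining lines would be skipped if Skip Parsing Errors is selected:

20.01.2024 14:00;DP_1;1.5
# a comment that does not match the expected layout
20.01.2024 14:15;DP_1
20.01.2024 14:30;DP_1;n/a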

Troubleshooting

If imports are faulty or incomplete, you can upload a file manually to see a list of potential issues.

For that purpose, if Source is set to Manual File Upload on your import, get a copy of the file that you want to import, then click on Upload and select that file. If Source is set to Windows Share (SMB) or FTP/FTPS/SFTP, click on Import Now on the Schedule card instead.

If there are any errors during the import, they will be displayed in a popup with their line number, error message and the part of the line that caused the issue.

Most frequent issues

  • Make sure that numbers don’t contain a thousands separator and that the decimal separator is a dot, e.g. 50000 or 1234.5.

  • Make sure that dates are given in the exact format that is specified under Column Configuration, e.g. dd.MM.yyyy HH:mm. If the imported dates are given in ISO-8601 notation, please make sure you enter the correct format string in the configuration, e.g. yyyy-MM-dd'T'HH:mm:ssX (note the single quotes around the letter T).

  • Make sure each row contains all the required fields, and that none of the fields are empty.

  • Make sure the target device contains data points with the exact system names given in the import file.