Extension Guide
This document is intended for developers who wish to extend the out-of-the-box import functionality. Some example use cases:
- Allow a new top-level entity to be imported
- Add new dynamic column definitions to be parsed separately (such as `allParentCategories` for a Category)
- Support different import upload formats
Extending import type and import specification
The Import module provides two Broadleaf enumerations for specifying what type of import is being performed: `ImportType` and `ImportSpecification`. `ImportType` indicates the type of data being imported. An import type does not necessarily require an association to an entity class, since custom imports (such as asset imports) can be written; however, the enumeration does allow an entity class to be associated with it for convenience. `ImportSpecification` indicates the format of the data being provided, such as CSV or JSON.
```java
public class ImportType implements Serializable, BroadleafEnumerationType {

    private static final LinkedHashMap<String, ImportType> TYPES = new LinkedHashMap<>();

    public static final ImportType CATEGORY = new ImportType("CATEGORY", "Category", Category.class);
    public static final ImportType PRODUCT = new ImportType("PRODUCT", "Product", Product.class);
    public static final ImportType TRANSLATION = new ImportType("TRANSLATION", "Translation", Translation.class);
    public static final ImportType ASSET = new ImportType("ASSET", "Asset");
}
```
```java
public class ImportSpecification implements Serializable, BroadleafEnumerationType {

    private static final LinkedHashMap<String, ImportSpecification> TYPES = new LinkedHashMap<>();

    public static final ImportSpecification CSV = new ImportSpecification("CSV", "CSV");
    public static final ImportSpecification JSON = new ImportSpecification("JSON", "JSON");
    public static final ImportSpecification ASSET_ARCHIVE = new ImportSpecification("ASSET_ARCHIVE", "Asset Archive");
}
```
Configuring and extending an import processor
Custom file import processors can be configured for custom specifications and import types. Create a `FileImportProcessor` and implement the `canHandle` method, similarly to the example below, to indicate that the processor is capable of handling the given import type and file format. In the `calculateTotalRecords` method, calculate the total number of records to be imported; in the `processFile` method, you have access to the uploaded file and can perform a custom import however you would like.
```java
public class MyFileImportProcessor implements FileImportProcessor {

    @Override
    public boolean canHandle(File uploadedFile, ImportSpecification importSpecification, ImportType importType) {
        boolean isXMLSpec = ImportSpecification.XML.equals(importSpecification);
        boolean isOfferImport = ImportType.OFFER_CODE.equals(importType);
        return isXMLSpec && isOfferImport;
    }

    @Override
    public void processFile(File uploadedFile, Process process, ImportContext context) throws ImportException {
        // do work
    }

    @Override
    public int calculateTotalRecords(File uploadedFile) throws ImportException {
        // calculate and return the number of records in the file
        return 0;
    }
}
```
Configuring and extending the supported CSV import file for an out-of-the-box import
Because of the flexible way the imported file is read, any field that needs to be added to a CSV import can simply be added as a new column whose name matches the property on the associated import class. For example, if you have an extension of offer code with a field named `description`, a new column named `description` can be added to the file and the property will be set automatically. If the property name is not a friendly name and you would like the column name in the file to be different, you can create a `HeaderNameMapper` that checks that the current header belongs to an offer code import and is the header name that needs to be changed, then returns the correct property name so that the property gets set correctly. Sticking with the description example, say the property is called `offerCodeDescription` but you want the column in the file named `description`; you would then create a `HeaderNameMapper` as seen below:
```java
public class MyHeaderNameMapper implements HeaderNameMapper {

    @Override
    public String transformHeader(String header, ImportType importType) {
        if (ImportType.OFFER_CODE.equals(importType) && header.equals("description")) {
            return "offerCodeDescription";
        }
        // leave all other headers unchanged
        return header;
    }
}
```
If your file contains headers that correlate with fields on offer code that you do not want set, you can ignore those headers by creating a `HeaderNameFilter`. In a `HeaderNameFilter`, simply check that the current import is an offer code import and return the list of strings representing the header names that should be ignored.
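A minimal sketch of such a filter follows. Since the exact interface is not shown in this guide, the `HeaderNameFilter` shape below (a single method returning the headers to skip) and the ignored column names are assumptions for illustration:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class HeaderFilterSketch {

    // Assumed shape of HeaderNameFilter; the real Broadleaf interface may differ.
    interface HeaderNameFilter {
        List<String> getIgnoredHeaders(String importType);
    }

    // Ignores two hypothetical columns, but only for offer code imports.
    static class OfferCodeHeaderNameFilter implements HeaderNameFilter {
        @Override
        public List<String> getIgnoredHeaders(String importType) {
            if ("OFFER_CODE".equals(importType)) {
                return Arrays.asList("internalNotes", "legacyId");
            }
            return Collections.emptyList(); // other imports keep all headers
        }
    }

    public static void main(String[] args) {
        HeaderNameFilter filter = new OfferCodeHeaderNameFilter();
        System.out.println(filter.getIgnoredHeaders("OFFER_CODE"));
        System.out.println(filter.getIgnoredHeaders("PRODUCT"));
    }
}
```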
If custom logic needs to run on an individual value in a row, create a class that extends `BroadleafCellProcessor`. In this class you are given the class of the import that is currently executing, the header name, and that record's value for the header. This lets you transform the value sent in the CSV and return the actual value you want for that cell.
Additionally, if logic needs to run at the row level, i.e. you need to know all the information about a record before custom logic can run, create a class that extends `RecordPersistencePreProcessor`. Using the provided `RecordParseResult` and `ImportContext`, you can change how the current record will be persisted. For example, if inserts need to be made before the main row is added, you would create `PrePersistenceRequest`s and add them to the `preRequests` field on the `RecordParseResult`. Since you have the `RecordParseResult`, you can also modify the `persistType`, which determines whether the record is an `Add` or an `Update`. You can also modify the main request in the event that one field's value depends on another field's value and a `BroadleafCellProcessor` therefore could not be used.
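To make that flow concrete, here is a self-contained sketch. The class shapes below (`RecordParseResult`, `PrePersistenceRequest`, and the pre-processor method name) are stand-ins assumed from the description above, not the real Broadleaf API:

```java
import java.util.ArrayList;
import java.util.List;

public class PreProcessorSketch {

    enum PersistType { ADD, UPDATE }

    // Assumed stand-in for a pre-persistence insert request.
    static class PrePersistenceRequest {
        final String description;
        PrePersistenceRequest(String description) { this.description = description; }
    }

    // Assumed stand-in for Broadleaf's RecordParseResult.
    static class RecordParseResult {
        final List<PrePersistenceRequest> preRequests = new ArrayList<>();
        PersistType persistType = PersistType.ADD;
        String externalId;
    }

    // Assumed shape of a record-level pre-processor.
    interface RecordPersistencePreProcessor {
        void preProcess(RecordParseResult result);
    }

    // Hypothetical example: if the record carries an externalId, queue an
    // insert for a lookup row first and switch the record to an update.
    static class MyRecordPreProcessor implements RecordPersistencePreProcessor {
        @Override
        public void preProcess(RecordParseResult result) {
            if (result.externalId != null) {
                result.preRequests.add(new PrePersistenceRequest("insert external-id mapping row"));
                result.persistType = PersistType.UPDATE;
            }
        }
    }

    public static void main(String[] args) {
        RecordParseResult result = new RecordParseResult();
        result.externalId = "EXT-42";
        new MyRecordPreProcessor().preProcess(result);
        System.out.println(result.persistType + " with " + result.preRequests.size() + " pre-request(s)");
    }
}
```

The key point the sketch mirrors is that the pre-processor sees the whole parsed record at once, so it can both queue preliminary inserts and flip the persist type based on any combination of field values.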