Using Groovy to fix the multiple-year data export issue

Hello everyone,

Today I’d like to share a solution for exporting multiple years of data to a file.

While going through the Oracle community, I noticed that many users want to export data across multiple years, but they frequently encounter the same challenge: it is not straightforward to do this with the standard export options.

"if I run an export from Jul-19 (FY20) to Jun-21 (FY21), then data export for Jun FY20 will be equal to Jun FY20 + Jun FY21 in the export file (and the same for Jun FY21) which obviously is not correct."

 https://community.oracle.com/customerconnect/discussion/540998/multi-year-data-export-to-file

And even in the Oracle documentation, there is a prominent note highlighting this limitation:

https://docs.oracle.com/en/cloud/saas/enterprise-performance-management-common/diepm/integrations_run_104x80e42d74_106x80e435c7.html

"When Exporting data, you will get duplicate data"

I believe that due to Essbase’s underlying structure, the Start Period and End Period settings in EPBCS data export behave similarly to the @XRANGE function.

It occurred to me that this issue could be addressed by using a Groovy rule.

Of course, Groovy rules are only available in the Enterprise version. If you are using the Standard version, Groovy is not supported.

How can we resolve this issue using a Groovy rule? Below is the approach I have implemented:

Step 1: First, we need to set up the integration to export the data.

Step 2: Create two variables named "startPer" and "endPer" of type String:

  • Note: users will enter periods in the form Jan-25, Feb-25


Step 3: Create a Groovy rule:

/*RTPS: {startPer} {endPer}*/
String startPer = normalizePeriod(rtps.startPer.toString())
String endPer   = normalizePeriod(rtps.endPer.toString())     
List<String> monthNames = ["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]
 //  StartPeriod
String startMonStr = startPer.split("-")[0]    
int startYear = Integer.parseInt(startPer.split("-")[1]) 
int startMonth = monthNames.indexOf(startMonStr)
 
//  EndPeriod
String endMonStr = endPer.split("-")[0]
int endYear = Integer.parseInt(endPer.split("-")[1])
int endMonth = monthNames.indexOf(endMonStr)

// Function normalizePeriod to prevent wrongly formatted user input
String normalizePeriod(String per) {
    Map<String, String> monMap = [
        "jan":"Jan","feb":"Feb","mar":"Mar","apr":"Apr","may":"May","jun":"Jun",
        "jul":"Jul","aug":"Aug","sep":"Sep","oct":"Oct","nov":"Nov","dec":"Dec"
    ]

    def parts = per.trim().split("-")
    return monMap[parts[0].toLowerCase()] + "-" + parts[1]
}



// List all periods between the Start and End period
List<String> periodList = []
 
// Loop from StartPeriod → EndPeriod
int y = startYear
int m = startMonth
 
while (y < endYear || (y == endYear && m <= endMonth)) {
    String per = "${monthNames[m]}-${(y % 100).toString().padLeft(2,'0')}"
    periodList << per
    m++
    if (m > 11) {
        m = 0
        y++
    }
}
 
 
// Print log for testing
println(periodList)

// Now we use EPM Automate to do the magic:
EpmAutomate automate = getEpmAutomate()
EpmAutomateStatus loginstatus = automate.execute('login', 'your login account here','your password here' , 'your EPBCS instance link here')

def status = automate.execute("listFiles")
def files = status.getItemsList()

// Build a wrapped period string for logging
String wrapped = periodList.collect { "{${it}}" }.join("")
String wrappedWithQuotes = "\"${wrapped}\""
println "$wrappedWithQuotes"
 
// Run the integration for each period to create a file per period:
periodList.each { per ->
 
    String fileName = "dataexport${per}.csv"
    def jsonBody = [
        "jobType":"INTEGRATION",
        "jobName":"Export FC Price",
        "periodName": "{${per}}",
        "importMode": "Replace",
        "exportMode": "Replace",
        "targetOptions":[
            "Download File Name":"$fileName",
            "Column Delimiter":",",
            "Include Header":"Yes"
        	]
    ]
    def conn = operation.application.getConnection("Run_Integration")
    def response = conn.post()
        .header("Content-Type", "application/json")
        .body(groovy.json.JsonOutput.toJson(jsonBody))
        .asString()
    println("DATA RULE EXECUTED:")
    println("Status: " + response.status)
}

sleep(100000) // wait to ensure the export jobs finish
//list all exported files
def status2 = automate.execute("listFiles")
def files2  = status2.getItemsList()

//merge all files into 1 file
def csvFiles = files2.findAll {
    def f = ((String) it).trim().toLowerCase()
    f.startsWith("dataexport") && f.endsWith(".csv")
}
println "List files merge: ${csvFiles}"
 
 
// Create the final merged file
String finalFileName = "dataexportFinal.csv"
println "Final merged file: $finalFileName"
 
// 
boolean headerWritten = false
 
csvWriter(finalFileName, false).withCloseable { out ->
    csvFiles.each { rawName -> 
        String fileName = ((String) rawName).tokenize('/').last()
        println "Merging file: $fileName"
        csvIterator(fileName, ',' as char).withCloseable { reader ->
            def isFirstLine = true
 
            reader.each { String[] values ->
 
                if (!headerWritten) {
                    out.writeNext(values)
                    headerWritten = true
                } else {
                    if (isFirstLine) {
                        isFirstLine = false
                        return
                    }
                    out.writeNext(values)
                }
 
                isFirstLine = false
            }
        }
    }
}
        

Let me walk through the code section by section:

1. Read RTPs and normalize Start/End Period

/*RTPS: {startPer} {endPer}*/
String startPer = normalizePeriod(rtps.startPer.toString())
String endPer   = normalizePeriod(rtps.endPer.toString())     
List<String> monthNames = ["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]
 //  StartPeriod
String startMonStr = startPer.split("-")[0]    
int startYear = Integer.parseInt(startPer.split("-")[1]) 
int startMonth = monthNames.indexOf(startMonStr)
 
//  EndPeriod
String endMonStr = endPer.split("-")[0]
int endYear = Integer.parseInt(endPer.split("-")[1])
int endMonth = monthNames.indexOf(endMonStr)

// Function normalizePeriod to prevent wrongly formatted user input
String normalizePeriod(String per) {
    Map<String, String> monMap = [
        "jan":"Jan","feb":"Feb","mar":"Mar","apr":"Apr","may":"May","jun":"Jun",
        "jul":"Jul","aug":"Aug","sep":"Sep","oct":"Oct","nov":"Nov","dec":"Dec"
    ]

    def parts = per.trim().split("-")
    return monMap[parts[0].toLowerCase()] + "-" + parts[1]
}        

  • normalizePeriod() standardizes the user input (e.g., jan-25 → Jan-25) so the parsing below is reliable.
  • monthNames is your reference list, so you can convert "Jan" → index 0, "Feb" → 1, etc.
  • The Start Period and End Period are then each split into a month index and a year.
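As a quick sanity check, the helper can be exercised on its own (duplicated here so the snippet runs standalone):

```groovy
// Standalone copy of the normalizePeriod helper, for illustration only.
String normalizePeriod(String per) {
    Map<String, String> monMap = [
        "jan":"Jan","feb":"Feb","mar":"Mar","apr":"Apr","may":"May","jun":"Jun",
        "jul":"Jul","aug":"Aug","sep":"Sep","oct":"Oct","nov":"Nov","dec":"Dec"
    ]
    def parts = per.trim().split("-")
    return monMap[parts[0].toLowerCase()] + "-" + parts[1]
}

assert normalizePeriod("jan-25")  == "Jan-25"   // lowercase month fixed
assert normalizePeriod(" FEB-25") == "Feb-25"   // whitespace and case fixed
```

Note that the helper does not validate unknown month names (an unmapped month would make the concatenation fail), so you may want to add a guard if users mistype frequently.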

2. Build list of all periods between Start and End

// List all periods between the Start and End period
List<String> periodList = []
 
// Loop from StartPeriod → EndPeriod
int y = startYear
int m = startMonth
 
while (y < endYear || (y == endYear && m <= endMonth)) {
    String per = "${monthNames[m]}-${(y % 100).toString().padLeft(2,'0')}"
    periodList << per
    m++
    if (m > 11) {
        m = 0
        y++
    }
}
 
println(periodList)        

  • You start from (startYear, startMonth) and loop until (endYear, endMonth), inclusive.
  • The condition y < endYear || (y == endYear && m <= endMonth) keeps looping until the current month passes the end period.
  • Inside the loop, the current month is formatted as Mon-YY and appended to periodList.
  • After each month, m++. If m goes past 11 (Dec), it resets to 0 (Jan) and y increments.
  • periodList will be something like: ["Jan-24", "Feb-24", "Mar-24", ..., "Dec-24", "Jan-25", ...]
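To see the December → January rollover in action, the loop can be run standalone with a range that crosses a year boundary (Nov-24 to Feb-25):

```groovy
List<String> monthNames = ["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]
int y = 24, m = 10              // Nov-24 (index 10 = Nov)
int endYear = 25, endMonth = 1  // Feb-25 (index 1 = Feb)
List<String> periodList = []
while (y < endYear || (y == endYear && m <= endMonth)) {
    periodList << "${monthNames[m]}-${(y % 100).toString().padLeft(2,'0')}"
    m++
    if (m > 11) { m = 0; y++ }
}
assert periodList*.toString() == ["Nov-24", "Dec-24", "Jan-25", "Feb-25"]
```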

3. Log in to EPMAutomate and list files

EpmAutomate automate = getEpmAutomate()
EpmAutomateStatus loginstatus = automate.execute('login', 'your login account here','your password here' , 'your EPBCS instance link here')

def status = automate.execute("listFiles")
def files = status.getItemsList()        

4. Build a wrapped period string for logging

String wrapped = periodList.collect { "{${it}}" }.join("")
String wrappedWithQuotes = "\"${wrapped}\""
println "$wrappedWithQuotes"        

  • Each period like "Jan-24" is wrapped into {Jan-24}.
  • They are then joined into one long string: {Jan-24}{Feb-24}{Mar-24}...
  • The whole string is then wrapped in double quotes: "{Jan-24}{Feb-24}...".
  • This is the kind of value you might pass to Data Management or a job parameter that expects {Jan-24} syntax; here it is only printed for logging.
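For a three-month range the wrapping looks like this:

```groovy
def periodList = ["Jan-24", "Feb-24", "Mar-24"]
String wrapped = periodList.collect { "{${it}}" }.join("")
String wrappedWithQuotes = "\"${wrapped}\""
assert wrapped == "{Jan-24}{Feb-24}{Mar-24}"
assert wrappedWithQuotes == '"{Jan-24}{Feb-24}{Mar-24}"'
```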

5. Run Integration job for each period

periodList.each { per ->
 
    String fileName = "dataexport${per}.csv"
    def jsonBody = [
        "jobType":"INTEGRATION",
        "jobName":"Export FC Price",
        "periodName": "{${per}}",
        "importMode": "Replace",
        "exportMode": "Replace",
        "targetOptions":[
            "Download File Name":"$fileName",
            "Column Delimiter":",",
            "Include Header":"Yes"
        	]
    ]
    def conn = operation.application.getConnection("Run_Integration")
    def response = conn.post()
        .header("Content-Type", "application/json")
        .body(groovy.json.JsonOutput.toJson(jsonBody))
        .asString()
    println("DATA RULE EXECUTED:")
    println("Status: " + response.status)
}        

For each period in periodList:

  • Define a CSV file name, e.g., "dataexportJan-24.csv".
  • Build the JSON body for a Data Management / Integration job (job type, job name, period, import/export modes, and target options).
  • getConnection("Run_Integration") refers to a Planning/EPBCS connection that you configured in the app (a REST connection pointing to DM or the EPM REST API).
  • You POST the JSON to start the job.
  • Log the HTTP status of the response (e.g., 200, 202).

Result: You trigger one export integration per period, each producing a separate CSV in the outbox.
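For reference, this is what the loop builds for a single period. It is a sketch: "Export FC Price" is the integration name from Step 1, and the target options mirror the rule above.

```groovy
import groovy.json.JsonOutput

String per = "Jan-24"
def jsonBody = [
    "jobType"   : "INTEGRATION",
    "jobName"   : "Export FC Price",      // your integration name from Step 1
    "periodName": "{${per}}".toString(),  // {Jan-24} — one period per run
    "importMode": "Replace",
    "exportMode": "Replace",
    "targetOptions": [
        "Download File Name": "dataexport${per}.csv".toString(),
        "Column Delimiter"  : ",",
        "Include Header"    : "Yes"
    ]
]
println JsonOutput.prettyPrint(JsonOutput.toJson(jsonBody))
```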

6. List files again and filter exported CSVs

def status2 = automate.execute("listFiles")
def files2  = status2.getItemsList()

//merge all files into 1 file
def csvFiles = files2.findAll {
    def f = ((String) it).trim().toLowerCase()
    f.startsWith("dataexport") && f.endsWith(".csv")
}
println "List files merge: ${csvFiles}"
 
// Create the final merged file
String finalFileName = "dataexportFinal.csv"
println "Final merged file: $finalFileName"
 
boolean headerWritten = false        

  • Call listFiles again; by now you expect multiple dataexportXXX.csv files.
  • files2.findAll{...} keeps only file names that start with dataexport and end in .csv.
  • finalFileName is the name of the merged output CSV.
  • headerWritten is a flag to ensure you write the CSV header row only once.
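With made-up file names, the filter behaves like this. One caveat: if listFiles returns folder-prefixed paths such as outbox/dataexport..., the startsWith check would miss them, so adjust the filter (e.g., strip the folder first) in that case.

```groovy
def files2 = ["dataexportJan-24.csv", "dataexportFeb-24.csv", "OtherExport.txt", "notes.csv"]
def csvFiles = files2.findAll {
    def f = ((String) it).trim().toLowerCase()
    f.startsWith("dataexport") && f.endsWith(".csv")
}
assert csvFiles == ["dataexportJan-24.csv", "dataexportFeb-24.csv"]
```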

7. Merge all CSVs into one master file

csvWriter(finalFileName, false).withCloseable { out ->
    csvFiles.each { rawName -> 
        String fileName = ((String) rawName).tokenize('/').last()
        println "Merging file: $fileName"
        csvIterator(fileName, ',' as char).withCloseable { reader ->
            def isFirstLine = true
 
            reader.each { String[] values ->
 
                if (!headerWritten) {
                    out.writeNext(values)
                    headerWritten = true
                } else {
                    if (isFirstLine) {
                        isFirstLine = false
                        return
                    }
                    out.writeNext(values)
                }
 
                isFirstLine = false
            }
        }
    }
}
        

Here you use Planning Groovy helpers:

  • csvWriter(finalFileName, false) opens the merged output file for writing.
  • csvIterator(fileName, ',' as char) reads each source CSV row by row.
  • withCloseable { ... } ensures each file is closed automatically.

Result:

  • The merged file dataexportFinal.csv contains one header row (from the first file) plus all data rows from every file, with duplicate headers removed.
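The header-deduplication logic is easiest to see in memory, with each file represented as a list of rows:

```groovy
// Two "files", each with its own header row.
def file1 = [["Account","Period","Amount"], ["FC_Price","Jan-24","100"]]
def file2 = [["Account","Period","Amount"], ["FC_Price","Feb-24","120"]]

def merged = []
boolean headerWritten = false
[file1, file2].each { rows ->
    boolean isFirstLine = true
    rows.each { row ->
        if (!headerWritten) {        // very first row → keep the header
            merged << row
            headerWritten = true
        } else if (!isFirstLine) {   // data rows → always keep
            merged << row
        }                            // else: duplicate header → skip
        isFirstLine = false
    }
}
assert merged == [["Account","Period","Amount"],
                  ["FC_Price","Jan-24","100"],
                  ["FC_Price","Feb-24","120"]]
```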

In conclusion, the rule:

  • Reads two RTPs for the start and end period and normalizes them.
  • Builds a month-by-month list from StartPeriod → EndPeriod (e.g., Jan-24 to Mar-24).
  • Logs into EPM Automate and checks files.
  • Calls a Data Integration / Data Rule via REST for each month to export that period's data into a CSV (dataexport<Per>.csv).
  • Waits for the jobs to finish (sleep(100000)).
  • Lists all exported files and filters dataexport*.csv.
  • Merges all those CSVs into a single dataexportFinal.csv.

Congrats, you have found a way to export multiple years of data at once without duplicate data.

A small trick to get around an Oracle limitation.
