Using Groovy to fix the multiple-year data export issue
Hello everyone,
Today I'd like to share a topic related to exporting multi-year data to a file.
While browsing the Oracle community, I noticed that many users want to export data across multiple years, but they frequently run into the same challenge: it is not straightforward to do this with the standard export options. One user described it like this:
"if I run an export from Jul-19 (FY20) to Jun-21 (FY21), then data export for Jun FY20 will be equal to Jun FY20 + Jun FY21 in the export file (and the same for Jun FY21) which obviously is not correct."
Even the Oracle documentation includes a prominent note highlighting this limitation.
I believe that due to Essbase’s underlying structure, the Start Period and End Period settings in EPBCS data export behave similarly to the @XRANGE function.
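To make the symptom concrete, here is a minimal plain-Groovy sketch (an illustration under made-up numbers, not the actual export engine) of why a month-keyed export double-counts June across fiscal years:

```groovy
// Simplified illustration: the range Jul-19 .. Jun-21 contains the
// month "Jun" twice (FY20 and FY21). If the export keys data by month
// name alone, both periods land in the same "Jun" bucket and get summed.
def periods = ["Jul-19", "Jun-20", "Jul-20", "Jun-21"]
def data    = ["Jul-19": 10, "Jun-20": 20, "Jul-20": 30, "Jun-21": 40]
def byMonth = periods.groupBy { it.split("-")[0] }
def junTotal = byMonth["Jun"].sum { data[it] }
assert byMonth["Jun"] == ["Jun-20", "Jun-21"]
assert junTotal == 60   // Jun FY20 + Jun FY21 reported as one number
```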
It occurred to me that this issue could be addressed with a Groovy rule.
Of course, Groovy rules are only available in the Enterprise version. If you are using the Standard version, Groovy is not supported.
How can we resolve this issue using a Groovy rule? Below is the approach I have implemented:
Step 1: Set up the integration that exports the data.
Step 2: Create two run-time prompt variables named "startPer" and "endPer" of type String:
Step 3: Create a Groovy rule:
/*RTPS: {startPer} {endPer}*/
String startPer = normalizePeriod(rtps.startPer.toString())
String endPer = normalizePeriod(rtps.endPer.toString())
List<String> monthNames = ["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]
// StartPeriod
String startMonStr = startPer.split("-")[0]
int startYear = Integer.parseInt(startPer.split("-")[1])
int startMonth = monthNames.indexOf(startMonStr)
// EndPeriod
String endMonStr = endPer.split("-")[0]
int endYear = Integer.parseInt(endPer.split("-")[1])
int endMonth = monthNames.indexOf(endMonStr)
// Helper normalizePeriod guards against wrongly formatted user input (e.g. lowercase month names)
String normalizePeriod(String per) {
Map<String, String> monMap = [
"jan":"Jan","feb":"Feb","mar":"Mar","apr":"Apr","may":"May","jun":"Jun",
"jul":"Jul","aug":"Aug","sep":"Sep","oct":"Oct","nov":"Nov","dec":"Dec"
]
def parts = per.trim().split("-")
return monMap[parts[0].toLowerCase()] + "-" + parts[1]
}
// List all periods between the Start and End period
List<String> periodList = []
// Loop from StartPeriod → EndPeriod
int y = startYear
int m = startMonth
while (y < endYear || (y == endYear && m <= endMonth)) {
String per = "${monthNames[m]}-${(y % 100).toString().padLeft(2,'0')}"
periodList << per
m++
if (m > 11) {
m = 0
y++
}
}
// Print log for testing
println(periodList)
// Now use EPM Automate to do the rest:
EpmAutomate automate = getEpmAutomate()
EpmAutomateStatus loginstatus = automate.execute('login', 'your login account here','your password here' , 'your EPBCS instance link here')
def status = automate.execute("listFiles")
def files = status.getItemsList()
String wrapped = periodList.collect { "{${it}}" }.join("")
String wrappedWithQuotes = "\"${wrapped}\""
println "$wrappedWithQuotes"
// Run the integration once per period, producing one file per period:
periodList.each { per ->
String fileName = "dataexport${per}.csv"
def jsonBody = [
"jobType":"INTEGRATION",
"jobName":"Export FC Price",
"periodName": "{${per}}",
"importMode": "Replace",
"exportMode": "Replace",
"targetOptions":[
"Download File Name":"$fileName",
"Column Delimiter":",",
"Include Header":"Yes"
]
]
def conn = operation.application.getConnection("Run_Integration")
def response = conn.post()
.header("Content-Type", "application/json")
.body(groovy.json.JsonOutput.toJson(jsonBody))
.asString()
println("DATA RULE EXECUTED:")
println("Status: " + response.status)
}
sleep(100000) // wait for the export jobs to finish before reading the outbox
//list all exported files
def status2 = automate.execute("listFiles")
def files2 = status2.getItemsList()
// Collect the exported per-period CSV files
def csvFiles = files2.findAll {
    def f = ((String) it).trim().toLowerCase()
    f.startsWith("dataexport") && f.endsWith(".csv")
}
println "List files merge: ${csvFiles}"
// Create the final merged file
String finalFileName = "dataexportFinal.csv"
println "Final merged file: $finalFileName"
boolean headerWritten = false
csvWriter(finalFileName, false).withCloseable { out ->
csvFiles.each { rawName ->
String fileName = ((String) rawName).tokenize('/').last()
println "Merging file: $fileName"
csvIterator(fileName, ',' as char).withCloseable { reader ->
def isFirstLine = true
reader.each { String[] values ->
if (!headerWritten) {
out.writeNext(values)
headerWritten = true
} else {
if (isFirstLine) {
isFirstLine = false
return
}
out.writeNext(values)
}
isFirstLine = false
}
}
}
}
Let me walk through the code step by step:
1. Read RTPs and normalize Start/End Period
/*RTPS: {startPer} {endPer}*/
String startPer = normalizePeriod(rtps.startPer.toString())
String endPer = normalizePeriod(rtps.endPer.toString())
List<String> monthNames = ["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]
// StartPeriod
String startMonStr = startPer.split("-")[0]
int startYear = Integer.parseInt(startPer.split("-")[1])
int startMonth = monthNames.indexOf(startMonStr)
// EndPeriod
String endMonStr = endPer.split("-")[0]
int endYear = Integer.parseInt(endPer.split("-")[1])
int endMonth = monthNames.indexOf(endMonStr)
// Helper normalizePeriod guards against wrongly formatted user input (e.g. lowercase month names)
String normalizePeriod(String per) {
Map<String, String> monMap = [
"jan":"Jan","feb":"Feb","mar":"Mar","apr":"Apr","may":"May","jun":"Jun",
"jul":"Jul","aug":"Aug","sep":"Sep","oct":"Oct","nov":"Nov","dec":"Dec"
]
def parts = per.trim().split("-")
return monMap[parts[0].toLowerCase()] + "-" + parts[1]
}
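As a quick standalone check of the normalizer (same function body as above, exercised outside Planning), it tolerates casing and stray whitespace in the prompt value:

```groovy
// Copy of the rule's normalizer, exercised standalone
String normalizePeriod(String per) {
    Map<String, String> monMap = [
        "jan":"Jan","feb":"Feb","mar":"Mar","apr":"Apr","may":"May","jun":"Jun",
        "jul":"Jul","aug":"Aug","sep":"Sep","oct":"Oct","nov":"Nov","dec":"Dec"
    ]
    def parts = per.trim().split("-")
    return monMap[parts[0].toLowerCase()] + "-" + parts[1]
}

assert normalizePeriod("JUL-19")  == "Jul-19"
assert normalizePeriod(" jun-21") == "Jun-21"
assert normalizePeriod("Aug-20")  == "Aug-20"   // already clean input passes through
```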
2. Build list of all periods between Start and End
// List all periods between the Start and End period
List<String> periodList = []
// Loop from StartPeriod → EndPeriod
int y = startYear
int m = startMonth
while (y < endYear || (y == endYear && m <= endMonth)) {
String per = "${monthNames[m]}-${(y % 100).toString().padLeft(2,'0')}"
periodList << per
m++
if (m > 11) {
m = 0
y++
}
}
println(periodList)
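For example, with startPer = "Jul-19" and endPer = "Jun-21" the loop produces 24 distinct periods, so the two June periods stay separate (a standalone re-run of the same loop):

```groovy
// Standalone run of the period loop for Jul-19 .. Jun-21
List<String> monthNames = ["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]
List<String> periodList = []
int y = 19, m = 6               // Jul-19
int endYear = 21, endMonth = 5  // Jun-21
while (y < endYear || (y == endYear && m <= endMonth)) {
    String per = "${monthNames[m]}-${(y % 100).toString().padLeft(2,'0')}"
    periodList << per
    m++
    if (m > 11) { m = 0; y++ }
}
assert periodList.size() == 24
assert periodList.first() == "Jul-19"
assert periodList.last()  == "Jun-21"
assert periodList.count { it.startsWith("Jun") } == 2   // Jun-20 and Jun-21 stay distinct
```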
3. Log in to EPMAutomate and list files
EpmAutomate automate = getEpmAutomate()
EpmAutomateStatus loginstatus = automate.execute('login', 'your login account here','your password here' , 'your EPBCS instance link here')
def status = automate.execute("listFiles")
def files = status.getItemsList()
4. Build a wrapped period string for logging
String wrapped = periodList.collect { "{${it}}" }.join("")
String wrappedWithQuotes = "\"${wrapped}\""
println "$wrappedWithQuotes"
5. Run Integration job for each period
periodList.each { per ->
String fileName = "dataexport${per}.csv"
def jsonBody = [
"jobType":"INTEGRATION",
"jobName":"Export FC Price",
"periodName": "{${per}}",
"importMode": "Replace",
"exportMode": "Replace",
"targetOptions":[
"Download File Name":"$fileName",
"Column Delimiter":",",
"Include Header":"Yes"
]
]
def conn = operation.application.getConnection("Run_Integration")
def response = conn.post()
.header("Content-Type", "application/json")
.body(groovy.json.JsonOutput.toJson(jsonBody))
.asString()
println("DATA RULE EXECUTED:")
println("Status: " + response.status)
}
For each period in periodList, the rule builds a JSON body and posts it to the Run_Integration connection.
Result: one export integration is triggered per period, each producing a separate CSV in the outbox.
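One refinement worth mentioning: the full script waits with a fixed sleep(100000) before reading the outbox. A more robust alternative, sketched here as an assumption rather than a tested recipe, is to poll listFiles until every expected file appears. The helper takes the listing as a closure so it can be checked without a Planning environment; the attempt and delay values are illustrative:

```groovy
// Hypothetical helper: poll until every expected file name shows up.
// In the rule you would pass a closure such as:
//   { automate.execute("listFiles").getItemsList()
//         .collect { ((String) it).tokenize('/').last() } }
boolean waitForFiles(List<String> expected, Closure<List<String>> listNames,
                     int maxAttempts = 30, int delayMs = 10000) {
    for (int i = 0; i < maxAttempts; i++) {
        if (listNames().containsAll(expected)) return true
        sleep(delayMs)
    }
    return false   // timed out before every file appeared
}

// Quick check against a fake outbox that fills up over two calls
def snapshots = [["other.csv"], ["other.csv", "dataexportJul-19.csv"]]
int call = 0
assert waitForFiles(["dataexportJul-19.csv"], { snapshots[Math.min(call++, 1)] }, 5, 1)
```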
6. List files again and filter exported CSVs
def status2 = automate.execute("listFiles")
def files2 = status2.getItemsList()
// Filter the exported per-period CSV files
def csvFiles = files2.findAll {
    def f = ((String) it).trim().toLowerCase()
    f.startsWith("dataexport") && f.endsWith(".csv")
}
println "List files merge: ${csvFiles}"
// Create the final merged file
String finalFileName = "dataexportFinal.csv"
println "Final merged file: $finalFileName"
7. Merge all CSVs into one master file
boolean headerWritten = false
csvWriter(finalFileName, false).withCloseable { out ->
csvFiles.each { rawName ->
String fileName = ((String) rawName).tokenize('/').last()
println "Merging file: $fileName"
csvIterator(fileName, ',' as char).withCloseable { reader ->
def isFirstLine = true
reader.each { String[] values ->
if (!headerWritten) {
out.writeNext(values)
headerWritten = true
} else {
if (isFirstLine) {
isFirstLine = false
return
}
out.writeNext(values)
}
isFirstLine = false
}
}
}
}
Here you use the Planning Groovy helpers csvWriter and csvIterator: the headerWritten flag keeps the header row from the first file only, while isFirstLine skips the header row of each subsequent file.
Result: a single dataexportFinal.csv containing one header row followed by the data rows from every per-period export.
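The same header logic can be sanity-checked outside Planning. In the sketch below, in-memory row lists stand in for csvIterator/csvWriter (which exist only inside the Planning rule context); the merge keeps the header from the first file and skips it in every later file:

```groovy
// Plain-Groovy sketch of the merge logic: one header row overall,
// then the data rows from every file.
List<List<String>> mergeRows(List<List<List<String>>> files) {
    List<List<String>> out = []
    boolean headerWritten = false
    files.each { rows ->
        rows.eachWithIndex { row, i ->
            if (i == 0) {                              // header row of this file
                if (!headerWritten) { out << row; headerWritten = true }
            } else {
                out << row                             // data row, always kept
            }
        }
    }
    return out
}

def fileA = [["Account", "Period", "Value"], ["Sales", "Jul-19", "100"]]
def fileB = [["Account", "Period", "Value"], ["Sales", "Aug-19", "200"]]
assert mergeRows([fileA, fileB]) ==
       [["Account", "Period", "Value"], ["Sales", "Jul-19", "100"], ["Sales", "Aug-19", "200"]]
```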
In conclusion:
You now have a way to export multiple years of data in one run without duplicated figures. It is a small trick to work around an Oracle limitation.