WSO2 Internship - IV
In this series of posts, I share some key points and ideas from the reports I submit to my university during my internship. I arrange these posts as a set of reports, where each report summarizes the tasks I completed during four weeks of my training and the experience I gained in that period. This is the fourth post in the series about my internship @WSO2.
During these four weeks, I mainly worked on designing dashboards, the third and final milestone of my project. I followed this order while developing the dashboard: first, I prepared the Spark scripts required to display the HL7-related analytics; then I developed the dashboard with existing gadgets using the gadget generation wizard; finally, I started work on the custom dashboard. Throughout, I tried to follow ESB Analytics, which is quite similar to my scenario. By the end of these four weeks, I had completed only the dashboard's summary part, with some fixes for chart zooming and for chart values displaying inversely, and I am now working on the search part, based on BAM's management console.
Arbitrary Attributes
These are attributes used in an Event Stream in addition to the usual meta, correlation, and payload data. They are specific to each scenario; for HL7, we can obtain more than 600 arbitrary attributes. We receive them through an Event Stream in DAS, store them in the event store, and use them for analysis with Spark scripts. While using these HL7-specific arbitrary attributes, I faced issues with SELECT and INSERT queries. For explicit references, see the link below.
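As a rough illustration of what such a Spark script looks like, here is a sketch. The table and column names (HL7_EVENT_TABLE, _obx_3_1, and so on) are hypothetical placeholders I made up for this example; the point is that arbitrary attributes are persisted with a leading underscore, so the schema in the script has to declare them the same way before they can be used in SELECT or INSERT queries.

```sql
-- Sketch of a DAS Spark script over HL7 arbitrary attributes.
-- HL7_EVENT_TABLE, HL7_SUMMARY_TABLE, _obx_3_1 are placeholder names.
CREATE TEMPORARY TABLE hl7Events
USING CarbonAnalytics
OPTIONS (tableName "HL7_EVENT_TABLE",
         schema "meta_host STRING, _obx_3_1 STRING");

CREATE TEMPORARY TABLE hl7Summary
USING CarbonAnalytics
OPTIONS (tableName "HL7_SUMMARY_TABLE",
         schema "testType STRING, total INT");

-- Once declared in the schema, the underscore-prefixed columns can be
-- selected and grouped like any ordinary column.
INSERT INTO TABLE hl7Summary
  SELECT _obx_3_1 AS testType, COUNT(*) AS total
  FROM hl7Events
  GROUP BY _obx_3_1;
```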
Things I Learned
When analyzing data with Spark scripts in WSO2 DAS, we may need to run the analysis repetitively over time, i.e., process data periodically to build historical stats such as PER MINUTE, PER HOUR, PER DAY, and PER MONTH summaries. Doing this naively means scanning all the data on every run; to avoid that, we can add the incremental parameters, with a WINDOW and a look-back count. Suppose we want PER HOUR analysis computed from the PER MINUTE table. We set the incremental parameter on the PER MINUTE table (the table that contains the per-minute analysis results), and set the window one step above that time unit, in this case HOUR. The third parameter, which is optional and defaults to 1 if we do not set it, is the number of windows to look back. The script then goes to the last record of the target table (PER HOUR) and updates it from the PER MINUTE table, taking only the values after the stored incremental marker value. This way we omit redundant re-analysis and increase the efficiency of data analysis over a large block of data.
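A minimal sketch of this, assuming the DAS 3.x incremental syntax (incrementalParams "uniqueId, windowUnit" plus INCREMENTAL_TABLE_COMMIT); the table names and schema here are placeholders, not my actual script:

```sql
-- Source table: per-minute results, marked for incremental processing.
-- "hl7PerMinute" is the unique marker ID; HOUR is one step above MINUTE.
CREATE TEMPORARY TABLE perMinute
USING CarbonAnalytics
OPTIONS (tableName "PER_MINUTE_TABLE",
         schema "messageCount INT, minuteTimestamp LONG",
         incrementalParams "hl7PerMinute, HOUR");

CREATE TEMPORARY TABLE perHour
USING CarbonAnalytics
OPTIONS (tableName "PER_HOUR_TABLE",
         schema "messageCount INT, hourTimestamp LONG");

-- Only rows newer than the stored marker are read from perMinute.
-- Truncate the minute timestamp (ms) down to the start of its hour.
INSERT INTO TABLE perHour
  SELECT SUM(messageCount) AS messageCount,
         (minuteTimestamp - (minuteTimestamp % 3600000)) AS hourTimestamp
  FROM perMinute
  GROUP BY (minuteTimestamp - (minuteTimestamp % 3600000));

-- Advance the marker so the next run starts after this run's data.
INCREMENTAL_TABLE_COMMIT hl7PerMinute;
```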
[NOTE: THIS IS NOT DEPRECATED, BUT I USED IT ONLY AS A CONTROLLER; INSTEAD OF THIS, THEY ARE NOW USING UUF]
Jaggery controllers live under /repository/deployment/server/jaggeryapps/portal/controllers/apis/, and in the URL we typically use portal/apis/nameofjaggerycontroller. Through this URL we can send an AJAX request, which is directed to the Jaggery controller on the server side. The controller receives the request, fetches data from the datastore according to what the AJAX request asks for, and sends the response back to the client. The client-side callback then acts on the received response: on success it executes the success handler, and on failure it is directed to the error handler. This is how a Jaggery controller is used by a client-side application to fetch details from the database on the server.
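The round trip above can be sketched roughly as follows. This is an illustrative fragment only, not runnable on its own: the controller file name (hl7stats.jag), the datasource credentials, the table name, and the renderSummary function are all hypothetical, and the Database host object and $.ajax call follow the Jaggery and jQuery APIs as I understand them.

```javascript
// --- Server side: a hypothetical portal/controllers/apis/hl7stats.jag ---
// <%
//   // Read what the AJAX request is asking for.
//   var action = request.getParameter("action");
//   // Hypothetical JDBC details for the analytics datastore.
//   var db = new Database("jdbc:mysql://localhost/ANALYTICS_DB",
//                         "user", "pass");
//   try {
//       if (action === "summary") {
//           // Fetch the data the client asked for and print it back
//           // as the HTTP response body.
//           var rows = db.query("SELECT testType, total " +
//                               "FROM HL7_SUMMARY_TABLE");
//           print(rows);
//       }
//   } finally {
//       db.close();
//   }
// %>

// --- Client side: AJAX call to portal/apis/hl7stats ---
$.ajax({
    url: "/portal/apis/hl7stats",
    data: { action: "summary" },
    dataType: "json",
    // Success case: the callback acts on the received response.
    success: function (data) {
        renderSummary(data); // hypothetical rendering function
    },
    // Failure case: directed to the error handler instead.
    error: function (xhr, status) {
        console.error("Failed to fetch HL7 summary: " + status);
    }
});
```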