Case study: Proteus DMP
Turning operations into decision making automation
A data-driven platform for capturing operational performance analytics. Pixelocracy analyzed existing processes, roles, and result-driven KPIs, and created a platform that digitizes workflows in order to automate operational monitoring and insight generation.
Operational routine in the maritime industry can be critical, in terms of both response time and performance. The goal of this project was twofold: on one hand, mapping selected employee journeys of a repetitive nature and transforming them into automated digital workflows; on the other hand, quickly surfacing insights for decision making and optimization opportunities.
To achieve this goal, the project was broken down into two phases. The first phase was a six-week "discovery and mapping" sprint, during which various stakeholder types were engaged through observation and interviews. It produced two key deliverables: 1) key workflow diagrams and 2) a data inventory of all available types and sources of information appropriate for inclusion in the platform.
The second phase was software development: 8 weeks for an MVP (minimum viable product) and another 4 weeks for a full version that included real-life usage.
Although new technologies can seem intimidating, there is a big difference between applying a simple solution and risky R&D. In this case it was clear from the start that in order to automate repetitive tasks, we had to capture and classify the corresponding repetitive information. Below we go through a few simple examples.
Applying simple data engineering techniques
All email traffic related to issue and inquiry management was channeled through a document database for further processing. Using simple machine learning techniques such as tokenization, text classification, rule-based matching, and model training to estimate prediction confidence, we were able to create a dashboard that automatically:
- Prioritized urgent events
- Tracked status
- Summarized account metrics
- Automated message templates
- Connected key-message information to contextual reporting
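To make the first two steps concrete, here is a minimal sketch of tokenization plus rule-based matching in plain Python. The keyword lists and categories are hypothetical illustrations, not the actual rule set used in the platform, which was derived from the project's data inventory.

```python
import re

# Hypothetical keyword rules; the real rules came from the discovery phase.
URGENT_KEYWORDS = {"urgent", "immediate", "breakdown", "delay"}
CATEGORY_RULES = {
    "inquiry": {"quote", "availability", "schedule"},
    "issue": {"fault", "error", "breakdown", "complaint"},
}

def tokenize(text: str) -> list:
    """Lowercase word tokenization: the first step of the pipeline."""
    return re.findall(r"[a-z0-9']+", text.lower())

def classify(text: str) -> dict:
    """Rule-based matching: flag urgency and pick the category with
    the largest keyword overlap (or 'other' if nothing matches)."""
    tokens = set(tokenize(text))
    category = max(CATEGORY_RULES, key=lambda c: len(tokens & CATEGORY_RULES[c]))
    return {
        "urgent": bool(tokens & URGENT_KEYWORDS),
        "category": category if tokens & CATEGORY_RULES[category] else "other",
    }

print(classify("URGENT: engine breakdown, immediate support needed"))
# {'urgent': True, 'category': 'issue'}
```

A trained classifier can later replace or supplement the keyword rules; starting with explicit rules keeps the behavior transparent while labeled data accumulates.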
These data were then combined with additional information from reports and dynamic calculations, creating a new dataset. For this purpose we applied an ETL (extract, transform, load) procedure, so that the final data could be represented in one consistent form regardless of their multiple original sources.
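The ETL step can be sketched as follows. All field names and the derived metric are hypothetical, chosen only to show the pattern of extracting from two differently-shaped sources, transforming into one schema, and loading into a single destination.

```python
# Hypothetical source records (the real sources were emails and reports).
email_metrics = [{"account": "ACME", "open_issues": 3}]
report_rows = [{"acct_name": "ACME", "avg_response_hours": 6.5}]

def extract():
    """Extract: pull raw records from each source as-is."""
    return email_metrics, report_rows

def transform(emails, reports):
    """Transform: join the sources on account and derive a new metric."""
    by_account = {r["acct_name"]: r for r in reports}
    return [
        {
            "account": e["account"],
            "open_issues": e["open_issues"],
            # Dynamic calculation combining both sources:
            "response_load": e["open_issues"]
            * by_account[e["account"]]["avg_response_hours"],
        }
        for e in emails
    ]

def load(rows, destination):
    """Load: append the unified records to the destination dataset."""
    destination.extend(rows)

warehouse = []
load(transform(*extract()), warehouse)
print(warehouse[0]["response_load"])  # 19.5
```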
Removing unnecessary infrastructure layers
Finally, to secure maintainability and cost efficiency, we relied on two key cloud services from AWS (Amazon Web Services). Raw datasets were stored in "buckets" under Amazon's S3 storage service. Avoiding complex system, network, and OS maintenance was also key, which is why tedious calculation tasks were moved to AWS Lambda, a serverless computing service where you simply upload code that runs whenever it is invoked.
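A Lambda function for such a calculation task can be as small as a single handler. The event shape and the per-account averaging below are hypothetical illustrations; in a real deployment the event would typically carry an S3 bucket and key, and the data would be fetched with boto3 before the calculation runs.

```python
def handler(event, context):
    """Entry point invoked by AWS Lambda: compute per-account averages
    from the payload. No servers or OS to maintain; only this code."""
    readings = event["readings"]  # hypothetical: [{"account": ..., "value": ...}]
    totals, counts = {}, {}
    for r in readings:
        totals[r["account"]] = totals.get(r["account"], 0.0) + r["value"]
        counts[r["account"]] = counts.get(r["account"], 0) + 1
    return {acct: totals[acct] / counts[acct] for acct in totals}

# Local invocation with a sample event (context is unused here):
print(handler({"readings": [{"account": "ACME", "value": 4.0},
                            {"account": "ACME", "value": 6.0}]}, None))
# {'ACME': 5.0}
```

Because the handler is a plain function, it can be tested locally with sample events before being uploaded, and it only incurs cost while it runs.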