Operationalizing Artificial Intelligence
Part 1: Operationalizing Artificial Intelligence – Challenges and Tips
Enterprises today are keen to incorporate Artificial Intelligence (AI) and Machine Learning (ML) into their analytics initiatives, but there are challenges. Most machine learning models behave differently with real-world data than they do in development environments, so a large percentage of machine learning solutions never make it past the proof of concept (POC). Most data science workbench solutions available in the marketplace fail to holistically address the challenges of enterprise-scale AI implementation.
Current challenges in Enterprise Artificial Intelligence
- AI needs data from multiple sources in different formats, e.g., clickstreams, sensor logs, ERP systems, and DICOM files for medical imaging.
- Integrating, transforming, and feeding multi-format data to train models is a huge challenge, one that intensifies as business needs change.
- Feature engineering is a tedious process that needs to be carried out periodically, and automating it is difficult because it requires domain expertise.
- As machine learning gets more complex with the introduction of deep learning, data scientists are continuously faced with the problem of explaining model results.
- Traditional enterprise architectures lack the scalability to incorporate specialized hardware such as GPUs, TPUs, or FPGAs for machine learning.
- Another hurdle is the manual effort involved in re-training models. A mechanism to automate the training process on dynamically scalable systems is the need of the hour.
- Enterprise data is scattered across multiple silos and faces tremendous challenges around quality, management, stewardship, lineage, and traceability.
- Efficiency in machine learning practice can only be attained by adequately democratizing AI, so that data scientists can share their models and datasets with team members.
- Rising concerns around data compliance and security pose a big challenge to AI. Enterprises cannot tackle these challenges without a good governance foundation.
Because of these challenges, dynamic provisioning of storage and compute across the machine learning lifecycle does not happen, which in turn restricts optimal resource utilization. Enterprise-grade MLOps (or AIOps) solutions are required to develop, deploy, monitor, manage, govern, and analyze the results of AI.
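One concrete monitoring task such an MLOps solution must handle is detecting when live data drifts away from the data a model was trained on. A minimal sketch of such a check, using a two-sample Kolmogorov-Smirnov test from SciPy on a single numeric feature (the function name and the 0.05 threshold are illustrative assumptions, not from any specific product):

```python
# Minimal drift check: compare a feature's training distribution
# against its live distribution with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(train_values, live_values, alpha=0.05):
    """Flag drift when the two samples are unlikely to come
    from the same distribution (p-value below alpha)."""
    _statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(seed=7)
train = rng.normal(loc=0.0, scale=1.0, size=5000)    # training data
shifted = rng.normal(loc=0.5, scale=1.0, size=5000)  # live data, mean shifted

print(has_drifted(train, train))    # identical samples: no drift
print(has_drifted(train, shifted))  # shifted mean: drift flagged
```

In production, a check like this would run per feature on a schedule, with alerts or automated retraining triggered when drift is flagged.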
Tips to Operationalize Artificial Intelligence
Sustainable and shareable AI
- AI systems by nature exhibit dynamic operational behavior and can only be sustained if we build in automation at the data ingestion and collection level, the training and monitoring level, and the infrastructure level. ML-assisted data curation and automatic model retraining using containerized solutions are fast gaining traction.
- AI solutions are fundamentally experimental in nature, with data science teams often working in silos. AI calls for a new wave of democratization via easy search and download of assets (datasets, models, and associated pipelines), with KPIs that visualize the ROI on published assets.
- Auditability and traceability of AI processes have become all the more important, especially due to regulatory mandates such as the GDPR.
- AI Governance also helps detect discrepancies in data distribution early, which may impact predictions and have legal implications for organizations.
- Building in governance across the AI life cycle is fundamental to operationalizing AI.
- As AI is fast making its way into critical business decision-making, it is important to determine the influencing factors behind its black-box results.
- Many compliance regulations, like the GDPR, mandate providing meaningful information about the logic behind AI systems, especially when they process sensitive information.
- A collaborative, automation-based ML system is important, but as users interact with AI systems on personal devices in real time, these systems also need to be agile and resilient. This means AI needs to operate closer to the user to process and infer on user-specific data.
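To make the explainability point above concrete, one widely used model-agnostic technique is permutation importance: shuffle one feature at a time on held-out data and measure how much the model's score drops. A minimal sketch with scikit-learn on synthetic data (the dataset and model choice are illustrative assumptions, not a prescription):

```python
# Permutation importance: a model-agnostic way to rank which
# features drive a black-box model's predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 5 features, only 2 carry real signal.
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=2, n_redundant=0,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Shuffle each feature in turn on held-out data; a large drop in
# accuracy means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=42)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance drop {importance:.3f}")
```

For deep learning models, gradient-based attributions or SHAP values serve the same purpose; the operational point is that explanation artifacts can be generated and stored alongside every model.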
In part 2 of this series on Operationalizing Artificial Intelligence, we will discuss the process and some tools that can be used for the purpose.