An LLM-based document analyser

We helped a startup take their ML product from prototype to MVP.

Uzeweb was approached by a stealth AI startup. They had built a novel document analysis solution in Python using large language models (LLMs), and needed help making it ready for public use.

The client's team's background was primarily in machine learning with Python, so they had chosen Django to build an initial prototype interface. Although the prototype was suitable for gathering early feedback from potential users, they wanted help making the frontend flexible enough to grow with the product, and advice on scaling their code and platform architecture for production.

This sort of project is a perfect fit for us: we have worked with several startups from data science backgrounds, and we regularly work with in-house teams to fill experience gaps and help them upskill so they can take the project forward themselves.

Initial investigation

We started by reviewing the whole platform to build a full picture of the problem space and of what the prototype was already doing well. We then made an objective assessment of the codebase and identified where we could help the most.

We produced a brief report outlining our findings, drawing on our experience with similar projects to flag the challenges facing the platform now and in the future.

We discussed the options and recommended a roadmap for future work, with a feature breakdown ranked by importance and return on investment. In particular, we proposed a high-level specification for new frontend and backend architectures, and recommended specific changes to the code to reduce long-term maintenance costs.

Implementation

We built a new frontend around TipTap, a ProseMirror-based WYSIWYG editor that gives users a familiar Word-style interface while producing structured, reliable JSON for developers to work with. TipTap is an established and flexible project, so it provides a solid base for future expansion. We wrote some custom extensions and used htmx to integrate the editor with the Django backend.
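To give a flavour of how the pieces fit together, here is a minimal sketch of the integration pattern (the view and model names are illustrative, not the client's actual code): TipTap serialises the editor state as JSON rooted at a "doc" node, and an htmx request posts it to a Django view, which can validate the structured payload without any HTML parsing.

```python
import json

from django.http import HttpResponse, HttpResponseBadRequest
from django.views.decorators.http import require_POST

from .models import Document  # illustrative model with a JSONField "content"


@require_POST
def save_document(request, pk):
    # TipTap serialises the editor state as JSON rooted at a "doc" node,
    # so the backend receives structured data rather than raw HTML.
    payload = json.loads(request.body)
    if payload.get("type") != "doc":
        return HttpResponseBadRequest("Expected a TipTap document")
    Document.objects.filter(pk=pk).update(content=payload)
    # htmx swaps the returned fragment into the page as a save confirmation.
    return HttpResponse("<span id='save-status'>Saved</span>")
```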

On the backend, we designed a new on-demand asynchronous pipeline architecture using Celery and RabbitMQ, sending real-time task updates to the frontend over websockets with htmx. We also laid the groundwork for production deployments when the project is ready.
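As a rough sketch of the pattern (the task names are illustrative, not the client's actual pipeline, and we assume Django Channels for the websocket layer): Celery chains the stages of an analysis run, RabbitMQ brokers the task messages, and each stage pushes a progress update to the browser's websocket group, where htmx swaps it into the page.

```python
from asgiref.sync import async_to_sync
from celery import chain, shared_task
from channels.layers import get_channel_layer


def notify(document_id, stage):
    # Push a progress update into the document's websocket group; a Channels
    # consumer relays it to the browser, where htmx swaps it into the page.
    async_to_sync(get_channel_layer().group_send)(
        f"document_{document_id}",
        {"type": "task.progress", "stage": stage},
    )


@shared_task
def extract_text(document_id):
    notify(document_id, "extracting")
    # ... pull the text out of the uploaded document ...
    return document_id


@shared_task
def run_llm_analysis(document_id):
    notify(document_id, "analysing")
    # ... send the text to the LLM and store the results ...
    return document_id


@shared_task
def publish_results(document_id):
    notify(document_id, "done")


def analyse_document(document_id):
    # Kick off the pipeline on demand; RabbitMQ brokers the task messages.
    chain(
        extract_text.s(document_id),
        run_llm_analysis.si(document_id),
        publish_results.si(document_id),
    ).delay()
```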

We restructured the Django project to follow best practices, and added a Docker development environment to resolve the consistency issues developers had been experiencing. We also introduced linting, testing, and CI tooling to improve code quality and give the project a strong foundation for future work.

Outcome

Uzeweb's experience with similar projects meant we could rapidly identify the core issues facing the platform and deliver a flexible foundation that the internal team can build on in their future development work.