Why I Chose Regulated AI Over Startup Speed
Earlier this year I made a career move that surprised some people. After three years as Chief Scientist at Halialabs — an AI startup in Singapore where I built NLP systems, shipped products, and had the freedom to choose my own tools and timelines — I joined a global bank. Regulated. Enterprise. Committees.
The question I keep getting: why would you leave the speed and autonomy of a startup for the bureaucracy of a bank?
The answer is straightforward. I did not take the job despite the constraints. I took it because of them.
What Startups Taught Me
Halialabs was a formative experience. We built NER systems for Southeast Asian languages, document extraction pipelines, and conversational AI prototypes. We moved fast. We deployed models with minimal review. We iterated on production systems weekly. I learned more about practical NLP in three years at a startup than in any other period of my career.
But I also learned the limits of startup AI:
Scale is capped by economics. A startup NER system processes thousands of documents per day. A bank processes millions. The architecture decisions, cost optimization, and reliability requirements are qualitatively different. Startup AI is about proving that something works. Enterprise AI is about proving that it works reliably, at scale, for years, under audit.
Impact is capped by distribution. We built excellent NLP systems at Halialabs. Some of them were used by a few dozen clients. At a bank with operations in 60 countries, a single improvement to a data pipeline touches billions of dollars in assets. The technical challenge may be similar, but the leverage is orders of magnitude higher.
Governance is an afterthought until it is a crisis. At a startup, model governance means "the person who trained the model knows what it does." At a bank, it means model validation, regulatory reporting, audit trails, explainability requirements, and change management. Most AI startups treat governance as overhead. I learned to see it as the thing that determines whether a model actually gets deployed — or sits in a notebook forever.
Why Regulated Environments Are Harder
There is a common misconception that enterprise AI is technically easier than startup AI — that the hard problems are in research labs and startups, while enterprises just deploy off-the-shelf solutions. This is backwards.
Enterprise AI in a regulated industry is harder in almost every dimension that matters for production systems:
Data access is the first constraint. In a startup, you can scrape data, use public datasets, or ask customers to share their data with minimal legal friction. In a bank, data is classified. Personal data is subject to GDPR and local privacy laws. Market data has licensing restrictions. Cross-border data transfer requires legal review. Getting access to the data you need to train a model can take weeks or months, and the data governance requirements shape your architecture before you write a single line of model code.
Model validation is a regulatory requirement. In financial services, models that inform business decisions must go through formal validation — an independent review of the model's methodology, assumptions, performance, and limitations. This is not a code review. It is a regulatory requirement under frameworks like SR 11-7 (Federal Reserve) and SS1/23 (Bank of England). The validation process tests your model against scenarios you did not test, questions assumptions you took for granted, and requires documentation that goes far beyond what any startup would produce. It makes your models better.
Explainability is not optional. A startup can deploy a black-box model and explain results post-hoc if a customer asks. A bank often cannot. Regulators and internal risk functions require that model decisions be explainable — not just "the model says this" but "the model says this because of these factors, with this confidence, and here are the limitations." This forces you to think about interpretability from the start, not as an afterthought.
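To make that concrete, here is a minimal sketch of what "explainable by construction" can mean in code: the model returns not just a score but the signed contribution of each factor, a confidence flag, and its known limitations. All names here (`Explanation`, `explain`, the features and weights) are hypothetical, chosen for illustration, not any real bank's API.

```python
# Hypothetical sketch: a model that returns structured explanations,
# not just a bare score. Feature names and weights are invented.
from dataclasses import dataclass


@dataclass
class Explanation:
    score: float
    contributions: dict  # feature name -> signed contribution to the score
    confidence: str
    limitations: list


# Illustrative linear scoring weights (a real model would be fitted, then validated).
WEIGHTS = {"debt_to_income": 2.0, "missed_payments": 1.5, "tenure_years": -0.3}


def explain(features: dict) -> Explanation:
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    return Explanation(
        score=sum(contributions.values()),
        contributions=contributions,
        confidence="high" if all(n in features for n in WEIGHTS) else "low",
        limitations=["trained on 2019-2023 data", "not validated for SME exposures"],
    )


exp = explain({"debt_to_income": 0.4, "missed_payments": 2, "tenure_years": 5})
# The top driver is recoverable from the output itself, not reverse-engineered later:
top_driver = max(exp.contributions, key=lambda k: abs(exp.contributions[k]))
```

The point of the sketch is the return type: when the explanation is part of the model's contract, "why did the model say this?" is a field lookup, not a forensic exercise.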
The cost of failure is asymmetric. If a startup's NER model misclassifies an entity, the customer sees a wrong answer and files a bug report. If a bank's risk model misclassifies an exposure, the consequence could be a regulatory fine, a capital adequacy issue, or a front-page story. This asymmetry changes how you think about testing, monitoring, and fallback mechanisms.
What Regulated AI Forces You to Learn
The constraints of regulated environments are not obstacles to good AI. They are forcing functions for better AI.
You learn to build data platforms before models. In a startup, you build the model first and figure out the data pipeline later. In a regulated enterprise, the data platform is the foundation. Data lineage, quality monitoring, access control, and cataloging are not nice-to-haves — they are prerequisites. You learn that most AI failures are data failures, and that investing in data infrastructure pays dividends across every model you build.
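"Data lineage is automatic" can sound abstract, so here is a toy sketch of the idea: each pipeline step declares its inputs, and lineage is recorded as a side effect of running the step. The step names, decorator, and `LINEAGE` registry are all invented for illustration; real platforms use dedicated lineage tooling, but the design principle is the same.

```python
# Hypothetical sketch: lineage captured as a byproduct of running pipeline steps,
# so every derived dataset records which upstream datasets produced it.
LINEAGE = {}  # dataset name -> list of upstream dataset names


def step(name, inputs):
    """Register a pipeline step's output dataset and its declared inputs."""
    def wrap(fn):
        def run(datasets):
            LINEAGE[name] = list(inputs)  # lineage recorded automatically
            datasets[name] = fn(*(datasets[i] for i in inputs))
            return datasets
        return run
    return wrap


@step("clean_trades", inputs=["raw_trades"])
def clean(raw):
    # Drop records with missing amounts (a stand-in for real quality rules).
    return [t for t in raw if t.get("amount") is not None]


@step("features", inputs=["clean_trades"])
def featurize(trades):
    return [{"amount": t["amount"], "large": t["amount"] > 1000} for t in trades]


data = {"raw_trades": [{"amount": 50}, {"amount": None}, {"amount": 2000}]}
data = clean(data)
data = featurize(data)
# Lineage is now queryable without any extra bookkeeping:
upstream = LINEAGE["features"]
```

When lineage falls out of the pipeline's structure like this, "where did this feature come from?" is answered by the platform, not by archaeology.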
You learn to think in systems, not models. A production AI system in a bank is not a model. It is a model embedded in a data pipeline, connected to source systems, feeding into downstream processes, monitored by operations, governed by risk frameworks, and audited by regulators. The model is the smallest component. If you only know how to train models, you cannot build production AI in this environment. You need to think in systems.
You learn that governance is a design problem. Most AI teams treat governance as a tax — something imposed by compliance that slows them down. I have come to see it as a design problem. How do you build a system where model changes are traceable? Where data lineage is automatic? Where model performance is continuously monitored? Where regulatory reporting is a byproduct of good engineering, not a separate workstream? Governance done well is invisible. Governance done poorly is a bottleneck.
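A minimal sketch of that last idea, under invented names: if every model change is recorded as an append-only audit event at deployment time, the regulatory change history is a filter over the same log rather than a separate workstream. Nothing here is a real bank's system; it only illustrates the design property.

```python
# Hypothetical sketch: model changes as append-only audit events, so
# traceability and reporting fall out of the deployment path itself.
import hashlib
import time

AUDIT_LOG = []  # in practice: an append-only store, not an in-memory list


def register_model_change(model_id, version, artifact_bytes, approved_by):
    event = {
        "model_id": model_id,
        "version": version,
        # A content hash ties the audit record to the exact artifact deployed.
        "artifact_sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "approved_by": approved_by,
        "timestamp": time.time(),
    }
    AUDIT_LOG.append(event)
    return event


def change_history(model_id):
    # Reporting is a query over the same log - no extra instrumentation.
    return [e for e in AUDIT_LOG if e["model_id"] == model_id]


register_model_change("credit-ner", "1.0", b"weights-v1", approved_by="validation-team")
register_model_change("credit-ner", "1.1", b"weights-v2", approved_by="validation-team")
history = change_history("credit-ner")
```

The design choice is that governance data is written on the only path a change can take, which is what makes it invisible when done well.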
You learn to communicate differently. At a startup, your audience is other engineers. At a bank, your audience includes business leaders who think in terms of revenue and risk, risk managers who think in terms of exposure and controls, and regulators who think in terms of compliance and consumer protection. You learn to explain AI in the language of your stakeholder, not in the language of your framework.
The Opportunity
Here is what I find exciting about enterprise AI right now: it is early.
Most banks, insurance companies, and financial institutions are at the beginning of their AI journey. They have data — enormous amounts of it. They have use cases — risk modeling, fraud detection, document processing, customer analytics, regulatory reporting. They have budget. What they lack is the operating model: the organizational structure, the technical infrastructure, the governance frameworks, and the talent to turn AI from a pilot project into an organizational capability.
Building that operating model is the work I find most interesting. It is not glamorous. It does not produce demos that go viral on Twitter. But it is the work that determines whether AI scales in the organizations that have the most data, the most resources, and the highest stakes.
The question for anyone building AI is: where do you want your work to have the most impact? For me, the answer is in the environments where the constraints are highest — because that is where the solutions are most valuable.
Three years ago I chose a startup because I wanted to learn how to build AI systems. Now I am choosing a regulated enterprise because I want to learn how to build AI systems that last. Both are worth doing. The skills compound.