In 2013, as I received my degree in Statistics and Computer Science, I had no idea how the world of technology would evolve, or how deeply it would shape my career. Back then, I didn’t have a clear vision of what I wanted to do, but I’m eternally grateful for every class I took, from Linear Algebra and Regression Analysis to Probability Theory. Little did I know, these foundational courses would become the bedrock of my work in Information Theory and AI/ML. Today, as I reflect on the past decade, it’s both humbling and exhilarating to witness the unprecedented growth and innovation unfolding in my lifetime.
Over the years, I’ve had the privilege of working on cutting-edge AI/ML models, particularly in the realm of Privacy-Preserving Technologies, a field that leaves me feeling blessed and cursed at the same time.
Why is AI a double-edged sword? AI is no longer just a tool; it’s the new electricity, powering nearly every device and system in the world. Its reach is vast, its potential limitless, and its pace unstoppable. But with great power comes great responsibility. As we open the floodgates and allow information to flow freely, we must ensure it remains clean, safe, private, transparent, and compliant. This is perhaps one of the greatest challenges of our century, and it’s a challenge we cannot afford to ignore.
In this post, I’ll take you on a journey through the breakthroughs we can expect by 2025 and beyond. From the rise of Quantum AI to the ethical dilemmas of Agentic AI, we’ll explore what’s on the horizon and why it matters. The future is coming faster than we think, and it’s time to get ready. Let’s go!
It’s been nearly 75 years since Alan Turing published his iconic paper “Computing Machinery and Intelligence,” which opens with the question “Can machines think?” I often wish he were still alive to witness how his inquiry has reshaped the course of human history. Turing might even have shared in the 2024 Nobel Prize in Physics, which was awarded to Professors Geoffrey Hinton and John Hopfield for their groundbreaking work on artificial neural networks. These pioneers have dedicated their lives to advancing AI, and their contributions are nothing short of revolutionary. If you haven’t seen it yet, I highly recommend watching John Hopfield’s Nobel lecture, delivered at the age of 91; his passion and the standing ovation he received are a testament to the enduring impact of his work. It’s moments like these that remind us why we do what we do.
2024 has been a landmark year for AI and machine learning. We’ve witnessed breakthroughs that once seemed like science fiction, particularly with the rise of Agentic AI: intelligent systems that can act autonomously across industries like fintech, medtech, biotech, regtech, and even blockchain. Who could have imagined that an AI agent would become a millionaire? Yet here we are, living in a world where AI is not just a tool but a transformative force behind big tech.
AI in Fintech: AI-driven trading algorithms now account for over 70% of global equity trades, generating trillions in revenue annually.
AI in Healthcare: AI models are diagnosing diseases like cancer with 95% accuracy, reducing diagnostic errors by 30% and saving countless lives.
AI in Biotech: AI-designed drugs are speeding up drug discovery by 50%, with over 100 AI-generated drugs currently in clinical trials.
AI in Blockchain: Decentralized AI agents are managing $10 billion+ in decentralized finance (DeFi) assets, autonomously executing trades and optimizing portfolios.
The Speed of Innovation
The pace of AI development is staggering. Just a decade ago, machine learning teams would spend six months collecting 10,000 training examples and painstakingly evaluating features to see results. Today, teams are achieving similar—or better—outcomes with just 1,000 examples and 20 features. This acceleration in experimentation has unlocked unprecedented flexibility, enabling rapid prototyping and innovation. It’s no wonder that 90% of Fortune 500 companies are now investing heavily in AI.
In 2025, we will witness the emergence of artificial capital wars, shifting the focus away from the current chip wars. For instance, the release of the DeepSeek R1 model already wiped an estimated $2 trillion from the stock market, effectively “popping” the AI bubble. (This was inevitable; see Prof. Yejin Choi’s talk for reference.) This model, reportedly trained in part on outputs from OpenAI’s models and costing only millions of dollars to develop, demonstrates that a cost-efficient solution with enhanced reasoning can outperform major players like OpenAI, Gemini, and Anthropic. This transition emphasizes the critical role of artificial capital in shaping the future of nations and their global standing. Countries that adeptly utilize their artificial capital will be at the forefront of this new paradigm. Furthermore, governments that implement flexible AI policies will strategically position themselves for long-term innovation. We are on the verge of a significant global power shift. As training costs decrease, we must ensure that we do not compromise ethics, privacy, and governance in the process.
These are my concerns, and they should be yours too:
Ethical AI: How do we ensure AI systems are fair, transparent, and free from bias?
Privacy: With AI systems processing vast amounts of personal data, how do we protect user privacy?
Control: As AI systems become more autonomous, who holds the reins? Governments? Corporations? Or the AI itself?
This brings us to the core of AI: data governance.
Data is the new currency of the internet. Those who own data hold power in the digital world. Data is at the core of all computer and technology functions, including accounting, finance, planning, control, order management, customer service, scheduling, process control, engineering, and design. Accurate and reliable data is essential for the effective operation of these systems and functions. Data governance is the key concept for managing data in a reliable and consistent manner.
When it comes to AI, data is the new oil. AI relies heavily on data for its operation and evolution. It uses data to learn, improve, and make predictions. The quality of the data significantly impacts how well a system learns and adapts. This is why data governance is crucial for ensuring the effective and ethical use of AI.
AI ethics and safety are significant concerns in a mindful society, yet we have not seen effective steps toward achieving them. I’m not referring to the claims made by Big Tech companies about their commitment to building safe and ethical AI systems. As we all know, these corporations hold vast amounts of data accumulated over the years, giving them an unfair advantage in the market. However, there is a check on this power: regulation. How can we hold them accountable?
Compliance-by-design systems and products are the answer. While data governance initiatives can be driven by a desire to improve data quality, they are often driven by C-level leaders responding to external regulations. In a recent report by the CIO WaterCooler community, 54% of respondents said the key driver was process efficiencies, 39% cited regulatory requirements, and only 7% customer service.[6] Examples of these regulations include the Sarbanes–Oxley Act, Basel I, Basel II, HIPAA, GDPR, cGMP,[7] and a number of data privacy regulations. To achieve compliance with these regulations, business processes and controls require formal management processes to govern the data subject to them. (Read the full report on UK law here.)
The topic of AI regulation is hot, and it’s only going to get hotter, with each country adopting its own approach. In the US, on October 30, 2023, President Joe Biden signed a landmark Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which requires developers of powerful AI systems (e.g., large language models) to share safety test results and critical information with the government.
In the UK, the government introduced the AI Safety Institute in November 2023 and released its AI White Paper, which focuses on a sector-specific regulatory approach rather than strict legislation. This approach emphasizes how AI will be used across different sectors, such as healthcare, education, and finance, which seems reasonable for now.
We also observe a more cautious approach from the European Union, which considers the broader societal risks associated with these AI systems. The recently released AI Act sets out a clear set of risk-based rules for AI developers and deployers regarding specific uses of AI. They seem to be missing the innovation train.
Lastly, there is China, which presents the most intriguing approach to understanding the power dynamics that AI tools create within a society. I believe they are on the right track: they recognize the power of information and manipulate it to suit their needs.
This year, “AI regulation” is expected to have a significant global impact. We will likely see increased control over poor implementations of AI models and the end of the era dominated by proprietary models. While open-source models can drive global innovation, they may also attract malicious actors who could undermine societal values, enable mass surveillance, discrimination, and misinformation, and threaten freedom of speech. However, this shift could also foster the development of new tools to combat deepfakes, manage AI risks, and address unethical practices.
If you haven’t heard about AI agents, you must be living under a rock. In the latter half of 2024, we saw AI agents make a significant impact on the internet, creating a seismic shift in the market. They are here to stay and are set to transform our lives in profound ways. Is this a good or bad thing? Will they replace the human workforce? Probably yes, for a large part, but they will also give humans more freedom to be creative and innovative. But how? Let’s look at a real story.
Back in 2017, when I started working for Apple as an Information Security agent, I had to be familiar with all the internal company policies (private) related to my role, which focused on customer data privacy, security, and compliance. Imagine being human and needing to remember over 200 policies while ensuring you don’t violate any compliance rules, all while creating the best customer experience possible. Often, you had to assist customers while searching for the correct compliance policy in the background, avoiding any compliance missteps along the way. For those who have worked at Apple, you know the company is very strict with its rules. If you compromise user data privacy for the first time (such as by sharing private data internally), you are required to write a “defense” letter to explain yourself. If there is a second incident, Apple will simply say goodbye.
This scenario sounds like an incredible multitasking challenge, right? It raises the question: how can a human keep all these details straight and execute them flawlessly? Well, maybe a Human Agent with an Apple badge :). This is how I felt for a short period of my life. We humans are not designed to repeat the same things over and over again. This is where we need to build smart systems, agents, and assistants that take care of “automated tasks” and give us our time back so we can focus on the essence of life. But this should not come at the cost of giving up human morals and values.
This concern is more relevant to artificial intelligence today than ever. AI has proven to be an incredibly powerful tool, demonstrating its capabilities by defeating the world-class Go champion, excelling in college admission tests, and even passing the bar exam. While many people worry that AI agents will replace them, they often overlook the fact that these agents still lack common sense, a fundamental aspect of human intelligence. Achieving human-level common sense remains a long-standing challenge in AI development and will be a key focus in the years to come.
Currently, AI agents are designed with features like reasoning and action, often referred to as “ReAct.” While they can replicate tasks and execute actions, they still struggle with common sense. The sketch below shows the basic shape of this loop.
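To make the loop concrete, here is a minimal, self-contained sketch of a ReAct-style agent in Python. Everything in it is illustrative rather than any particular product’s implementation: the llm() function is a scripted stand-in for a real model call, and the search and calc tools are hypothetical stubs.

```python
from typing import Callable

def search_wiki(query: str) -> str:
    """Hypothetical lookup tool; a real agent would call an actual search API."""
    return f"(stub) top search result for '{query}'"

def calc(expression: str) -> str:
    """Toy arithmetic tool (demo only; eval is unsafe on untrusted input)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS: dict[str, Callable[[str], str]] = {"search": search_wiki, "calc": calc}

# Scripted model replies so the example runs without a real LLM.
SCRIPT = iter([
    "Thought: I should compute the tip.\nAction: calc: 42.50 * 0.20",
    "Thought: I now know the answer.\nFinal Answer: the tip is $8.50",
])

def llm(transcript: str) -> str:
    """Stand-in for a real language-model call."""
    return next(SCRIPT)

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = llm(transcript)
        transcript += "\n" + reply
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        if "Action:" in reply:
            # Parse "Action: tool_name: argument" and run the chosen tool.
            name, arg = reply.split("Action:", 1)[1].strip().split(":", 1)
            observation = TOOLS[name.strip()](arg.strip())
            transcript += f"\nObservation: {observation}"
    return "(no answer within the step budget)"

print(react("What is a 20% tip on $42.50?"))
```

A production agent would swap llm() for a real model API, add more tools, and sandbox their execution, but the Thought, Action, Observation cycle has exactly this shape.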
However, the Godfather of AI, Professor Geoffrey Hinton, expresses less optimism, and he has every right to be concerned. He warns that the rapid advancement of AI, and the potential for subjective experience in AI agents, may lead to the development of powerfully manipulative AIs without ethical considerations, which could ultimately pose a threat to humanity.
This year, we are likely to see more teams launching AI agents for enterprises, with these agents accomplishing remarkable tasks in customer service, frontend development, scientific research, and policy advising. One interesting trend we may observe is the emergence of a new era for search engines, potentially marking the end of traditional links. Instead of saying “Google it,” we may build personal/private AI agents powered by DeepSeek, ChatGPT, or Perplexity models, or integrate them locally through agent APIs to receive direct answers in a single query (a small sketch follows below). We have already started to see this happening: Microsoft Azure and Perplexity have integrated the DeepSeek R1 model into their products to serve their clients.
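As a taste of what “direct answers in a single query” can look like, here is a hedged sketch of querying a locally hosted model through an OpenAI-compatible chat endpoint, which servers such as vLLM and Ollama expose. The URL and model name below are assumptions; substitute whatever your own server reports.

```python
import requests

# Assumed local endpoint and model name; adjust for your server.
URL = "http://localhost:8000/v1/chat/completions"

response = requests.post(
    URL,
    json={
        "model": "deepseek-r1",  # hypothetical local model identifier
        "messages": [
            {"role": "user", "content": "Summarize the key points of the EU AI Act."}
        ],
    },
    timeout=60,
)
# OpenAI-compatible servers return the answer under choices[0].message.content.
print(response.json()["choices"][0]["message"]["content"])
```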
We are living through a pivotal moment in Agentic AI, one that echoes Richard Feynman’s revolutionary dream in the 1980s: to harness quantum computers to simulate the complexities of nature itself. Feynman’s vision was not just about computation—it was about reimagining the boundaries of science and technology. Decades later, as we explore Quantum Machine Learning (QML), we are inspired by that same audacity. Though quantum computers are not yet mainstream, history reminds us that progress begins with bold dreams. When Alan Turing asked, “Can machines think?” in 1950, modern computers were still a distant reality. Today, Feynman’s dream challenges us to ask equally ambitious questions, pushing the limits of what’s possible and continuing the legacy of visionary thinking that drives humanity forward.
In recent years, there has been an explosive surge of interest in merging quantum information processing with machine learning. This convergence, often referred to as Quantum Machine Learning (QML), promises to redefine the boundaries of what AI can achieve. But how did we get here?
The roots of QML trace back to the mid-1990s, when researchers began exploring quantum models of neural networks. These early investigations aimed to uncover whether quantum mechanics could provide insights into the functioning of the human brain. While these efforts were largely theoretical, they laid the groundwork for a new way of thinking about computation and learning.
By the early 2000s, discussions around statistical learning theory in quantum contexts began to emerge. However, these ideas were fragmented and received limited attention. Notably, researchers like Bonner and Freivalds highlighted the challenges of unifying quantum principles with classical learning frameworks. Despite these hurdles, the seeds of QML had been planted.
Over the last decade, the field has exploded with activity. Big Tech companies like Google, IBM, and Microsoft, as well as a growing number of startups such as Quantinuum, D-Wave, Xanadu, and SandboxAQ, are racing to build QML models that can outperform classical ML approaches in specific domains. For instance:
Quantum Neural Networks (QNNs) are being developed to tackle complex optimization problems.
Quantum Kernels are enabling classification tasks in high-dimensional feature spaces that are intractable for classical systems. This is a very promising trick; see the sketch after this list.
Quantum Generative Models are pushing the boundaries of data generation and simulation.
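To ground the kernel idea, here is a minimal sketch using PennyLane (my choice of framework is an assumption; any gate-based simulator works). It computes the kernel value k(x1, x2) as the squared overlap between two data-encoding circuits, a quantity a classical SVM could then consume.

```python
import pennylane as qml
import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def kernel_circuit(x1, x2):
    # Encode x1, then apply the inverse encoding of x2; the probability of
    # measuring |00> equals the squared overlap of the two feature states.
    qml.AngleEmbedding(x1, wires=range(n_qubits))
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_qubits))
    return qml.probs(wires=range(n_qubits))

def kernel(x1, x2):
    return kernel_circuit(x1, x2)[0]  # P(|00>) = |<phi(x2)|phi(x1)>|^2

print(kernel(np.array([0.1, 0.7]), np.array([0.1, 0.7])))  # ~1.0: identical inputs
print(kernel(np.array([0.1, 0.7]), np.array([1.2, 0.3])))  # < 1.0: different inputs
```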
These advancements are not just theoretical—they are being tested on real quantum hardware, such as IBM's Quantum Experience, Google's Sycamore processor, and Rigetti's quantum computers. While we are still in the Noisy Intermediate-Scale Quantum (NISQ) era, where quantum systems are prone to errors, the progress is undeniable.
The ability to train machine learning (ML) models with massively parallel computation could revolutionize AI development. Today, classical parallel computing (e.g., GPU clusters, TPUs) already accelerates training by splitting workloads across multiple processors. However, even these systems face limitations due to Amdahl's Law, which states that speedup is constrained by the fraction of tasks that must run sequentially.
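To see what that constraint looks like in numbers, here is a tiny illustration of Amdahl’s Law; the 95% parallel fraction is an assumed figure chosen for the example.

```python
# Amdahl's Law: speedup(N) = 1 / ((1 - p) + p / N), where p is the fraction of
# the workload that parallelizes and N is the number of processors.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.95  # assume 95% of training parallelizes (illustrative number)
for n in (8, 64, 1024, 1_000_000):
    print(f"{n:>9} processors -> {speedup(p, n):5.2f}x")
# The speedup approaches the ceiling 1 / (1 - 0.95) = 20x no matter how many
# processors we add; the sequential 5% dominates.
```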
Quantum Artificial Intelligence (QAI), however, offers a fundamentally different approach. Unlike classical parallelism, quantum computers leverage quantum superposition and entanglement to perform computations in ways that classical systems cannot replicate.
Algorithmic Speedups: Quantum algorithms like Shor's factorization can solve certain problems exponentially faster than the best known classical algorithms, while Grover's search offers a quadratic speedup for unstructured search (see the sketch after this list).
Enhanced Feature Spaces: Quantum computers can process and analyze data in high-dimensional Hilbert spaces, enabling more sophisticated pattern recognition.
Hybrid Models: Combining classical and quantum systems allows us to leverage the strengths of both, creating hybrid models that are more powerful than either approach alone.
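For a concrete taste of these speedups, here is a minimal Grover's search sketch, again using PennyLane (assumed): two qubits, an oracle that marks |11>, and a single Grover iteration, which on two qubits lands on the marked state with probability 1 in a noiseless simulator.

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def grover():
    # Start in the uniform superposition over all four basis states.
    for w in range(2):
        qml.Hadamard(wires=w)
    # Oracle: flip the phase of the marked state |11>.
    qml.CZ(wires=[0, 1])
    # Diffusion operator: reflect all amplitudes about their mean.
    for w in range(2):
        qml.Hadamard(wires=w)
        qml.PauliX(wires=w)
    qml.CZ(wires=[0, 1])
    for w in range(2):
        qml.PauliX(wires=w)
        qml.Hadamard(wires=w)
    return qml.probs(wires=[0, 1])

print(grover())  # ~[0, 0, 0, 1]: nearly all probability on |11>
```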
As we stand on the brink of this new era, it's clear that Quantum AI is not just a futuristic dream—it's a tangible reality that is rapidly taking shape. While challenges like quantum error correction, scalability, and hardware limitations remain, the pace of innovation is accelerating. The race to build quantum-enhanced AI systems is well underway, and the implications for fields like drug discovery, financial modeling, climate prediction, and natural language processing are profound.
In the words of Richard Feynman, "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical." As we continue to explore the intersection of quantum computing and artificial intelligence, we are not just building better machines—we are unlocking a deeper understanding of the universe itself.