
Building an AI-first company: What these two business leaders learned from top experts
Diverging Reports Breakdown
Pros, Cons, Debate, Arguments, Computer Science, & Technology
Artificial intelligence (AI) is the use of “computers and machines to mimic the problem-solving and decision-making capabilities of the human mind,” according to IBM. [1]

The idea of AI dates back at least 2,700 years. As explained by Adrienne Mayor, research scholar, folklorist, and science historian at Stanford University, “Our ability to imagine artificial intelligence goes back to ancient times. Long before technological advances made self-moving devices possible, ideas about creating artificial life and robots were explored in ancient myths.” [2]

Mayor notes that the myths about Hephaestus, the Greek god of invention and blacksmithing, included precursors to AI. For example, Hephaestus created the giant bronze man Talos, which had a mysterious life force from the gods called ichor. Hephaestus also created Pandora and her infamous and powerful jar/box, as well as a set of automated servants made of gold that were given the knowledge of the gods. Mayor concludes, “Not one of those myths has a good ending once the artificial beings are sent to Earth. It’s almost as if the myths say that it’s great to have these artificial things up in heaven used by the gods. But once they interact with humans, we get chaos and destruction.” [2]

The modern notion of AI largely began when Alan Turing, who contributed to breaking the Nazis’ Enigma code during World War II, created the “Turing test” to determine if a computer is capable of “thinking.” The value and legitimacy of the test have long been debated. [1] [3] [4]

The “Father of Artificial Intelligence,” John McCarthy, coined the term “artificial intelligence” as “the science and engineering of making intelligent machines.” He would go on to create the computer programming language LISP (which is still used in AI), host computer chess games against human Russian opponents, and develop the first computer with “hand-eye” capability, all important building blocks for AI. [1] [5] [6] [7]

AI technology continued to grow at a rapid pace during the 1950s. And, as computers became cheaper in the 1960s and ’70s, AI programs flourished, and U.S. government agencies including the Defense Advanced Research Projects Agency (DARPA) began to fund AI-related research. But computers were still too weak to manage the language tasks researchers asked of them. Another influx of funding in the 1980s and early ’90s furthered the research, including the invention of expert systems. But progress again waned with another drop in government funding. [10]

Now, artificial intelligence is used for a variety of everyday implementations including facial recognition software, online shopping algorithms, search engines, digital assistants like Siri and Alexa, translation services, automated safety functions on cars, cybersecurity, airport body scanning security, poker playing strategy, and fighting disinformation on social media. Generative AI (a kind of AI used in content creation, including text, images, and music) is also widely used for any writing project, from crafting and sending out resumes and sales pitches to completing homework assignments such as essays and book reports. [13] [58]

More recently, advances in computer storage limits and speeds have opened new avenues for AI research and implementation, aiding scientific research and forging new paths in medicine for patient diagnosis, robotic surgery, and drug development. [1] [10] [11] [12]

For more on the history of AI, see ProCon’s Historical Timeline.
Pro 1: AI can make everyday life more enjoyable and convenient, while improving our health and standard of living.

Why sit in a traffic jam when a map app can navigate you around the car accident? Why fumble with shopping bags searching for your keys in the dark when a preset location-based command can have your doorway illuminated as you approach your now unlocked door? [23] Why scroll through hundreds of possible TV shows when the streaming app already knows what genres you like? Why forget eggs at the grocery store when a digital assistant can take an inventory of your refrigerator and add them to your grocery list and have them delivered to your home? All of these marvels are assisted by AI technology. [23]

AI-enabled fitness apps boomed during the COVID-19 pandemic when gyms were closed, increasing the number of AI options for at-home workouts. Now, you can not only set a daily steps goal with encouragement reminders on your smart watch, but you can ride virtually through the countryside on a Peloton bike from your garage or have a personal trainer on your living room TV. For more specialized fitness, AI wearables can monitor yoga poses or golf and baseball swings. [24] [25]

AI can even enhance your doctor’s appointments and medical procedures. It can alert medical caregivers to patterns in your health data as compared to a vast library of medical data, while also doing the paperwork tied to medical appointments so doctors have more time to focus on their patients, resulting in more personalized care. AI can even help surgeons be quicker, more accurate, and less invasive in their operations. [26]

Smart speakers including Amazon’s Echo can use AI to soothe babies to sleep and monitor their breathing. Using AI, speakers can also detect regular and irregular heartbeats, as well as heart attacks and congestive heart failure. [27] [28] [29]

AI is even beginning to excel at creative writing, producing fiction and poetry that some readers enjoy. Some observers predict that TV and film scripts will also soon benefit from the compositional powers of AI.
Pro 2: AI makes work easier for students and professionals alike.

Much like the calculator did not signal the end of students’ grasp of mathematics, typing did not eliminate handwriting, and Google did not herald the end of research skills, AI does not signal the end of reading and writing or of education in general. [78] [79]

Elementary school teacher Shannon Morris explains that AI tools like “ChatGPT can help students by providing real-time answers to their questions, engaging them in personalized conversations, and providing customized content based on their interests. It can also offer personalized learning resources, videos, articles, and interactive activities. This resource can even provide personalized recommendations for studying, help with research, provide context-specific answers, and offer educational games.” She also notes that teachers’ more daunting tasks like grading and making vocabulary lists can be streamlined with AI tools. [79]

For adults, AI can similarly make work easier and more efficient, rather than signaling the rise of the robot employee. Pesky, time-consuming tasks like scheduling and managing meetings, finding important emails amongst the spam, prioritizing tasks for the day, and creating and posting social media content can be delegated to AI, freeing up time for more important and rewarding work. The technology can also help with brainstorming, understanding difficult concepts, finding errors in code, and learning languages via conversation, making daunting tasks more manageable. [80]

AI is a tool that, if used responsibly, can enhance both learning and work for everyone. Carri Spector of the Stanford Graduate School of Education says, “I think of AI literacy as being akin to driver’s ed: We’ve got a powerful tool that can be a great asset, but it can also be dangerous. We want students to learn how to use it responsibly.” [81]
Pro 3: AI helps marginalized groups by offering accessibility for people with disabilities.

Artificial intelligence is commonly integrated into smartphones and other household devices. Virtual assistants, including Siri, Alexa, and Cortana, can perform innumerable tasks from making a phone call to navigating the internet. People who are deaf and hearing impaired can access transcripts of voicemail or other audio, for example. [20]

Other virtual assistants can transcribe conversations as they happen, allowing for more comprehension and participation by those who have impairments that affect their communication. Using voice commands with virtual assistants can help people with mobility disabilities who may have difficulty navigating small buttons or screens or turning on a lamp. [20]

Apps enabled by AI on smartphones and other devices, including VoiceOver and TalkBack, can read messages, describe app icons or images, and give information such as battery levels for visually impaired people. Other apps, such as Voiceitt, can transcribe and standardize the voices of people with speech impediments. [20]

Wheelmap provides users with information about wheelchair accessibility, and Evelity offers indoor navigation tools that are customized to the user’s needs, providing audio or text instructions and routes for wheelchair accessibility. [20]

Other AI implementations, such as smart thermostats, smart lighting, and smart plugs, can be automated to work on a schedule to aid people with mobility or cognitive disabilities to lead more independent lives. [21]

More advanced AI projects can combine with robotics to help physically disabled people. HOOBOX Robotics, for example, uses facial recognition software to allow a wheelchair user to move their wheelchair with facial expressions, making movement easier for seniors and those with ALS or quadriparesis. [22]
Pro 4: Artificial intelligence can improve workplace safety.

AI doesn’t get stressed, tired, or sick, three major causes of human accidents in the workplace. AI robots can collaborate with or replace humans for especially dangerous tasks. For example, 50 percent of construction companies that used drones to inspect roofs and other risky tasks saw improvements in safety. [14] [15]

Artificial intelligence can also help humans be safer. For instance, AI can ensure employees are up to date on training by tracking and automatically scheduling safety or other training. AI can also check and offer corrections for ergonomics to prevent repetitive stress injuries or worse. [16]

An AI program called AI-SAFE (Automated Intelligent System for Assuring Safe Working Environments) aims to automate the workplace personal protective equipment (PPE) check, eliminating human errors that could cause accidents in the workplace. As more people wear PPE to prevent the spread of COVID-19 and other viruses, this sort of AI could protect against large-scale outbreaks. [17] [18] [19]

In India, AI was used during the coronavirus pandemic to reopen factories safely by providing camera, cell phone, and smart wearable device-based technology to ensure social distancing, take employee temperatures at regular intervals, and perform contact tracing if anyone tested positive for the virus. [18] [19]

AI can also perform more sensitive tasks in the workplace such as scanning work emails for improper behavior and types of harassment. [15]
“My biggest lesson was realizing that domain expertise matters more than algorithmic complexity.”
Claudia is an AI entrepreneur and data scientist with 6+ years of experience building production machine learning models in FinTech. She placed second and won $10,000 in a Web3 credit scoring ML competition in 2024.
You recently won $10,000 in a machine learning competition — congratulations! What was the biggest lesson you took away from that experience, and how has it shaped your approach to real-world ML problems?
My biggest lesson was realizing that domain expertise matters more than algorithmic complexity. It was a Web3 credit scoring ML competition, and despite never having worked with blockchain data or neural networks for credit scoring, my 6+ years in FinTech gave me the business intuition to treat this as a standard credit risk problem. This perspective proved more valuable than any degree or deep learning specialization.
This experience fundamentally shifted how I approach ML problems in two ways:
First, I learned that shipped is better than perfect. I spent only 10 hours on the competition and submitted an “MVP” approach rather than over-engineering it. This applies directly to industry work: a decent model running in production delivers more value than a highly optimized model sitting in a Jupyter notebook.
Second, I discovered that most barriers are mental, not technical. I almost didn’t enter because I didn’t know Web3 or feel like a “competition person”, but in retrospect, I was overthinking it. While I’m still working on applying this lesson more broadly, it has changed how I evaluate opportunities. I now focus on whether I understand the core problem and whether it excites me, and trust that I’ll be able to figure it out as I go.
Your career path spans business, public policy, machine learning, and now AI consulting. What motivated your shift from corporate tech to the AI freelance world, and what excites you most about this new chapter? What kinds of challenges or clients are you most excited to work with?
The shift to independent work was driven by wanting to build something I could truly own and grow. In corporate roles, you build valuable systems that outlive your tenure, but you can’t take them with you or get ongoing credit for their success. Winning this competition showed me I had the skills to create my own solutions rather than just contributing to someone else’s vision. I learned valuable skills in corporate roles, but I’m excited to apply them to challenges I care deeply about.
I’m pursuing this through two main paths: consulting projects that leverage my data science and machine learning expertise, and building an AI language learning product. The consulting work provides immediate revenue and keeps me connected to real business problems, while the language product represents my long-term vision. I’m learning to build in public and sharing my journey through my newsletter.
As a polyglot who speaks nine languages, I’ve thought deeply about the challenges of achieving conversational fluency and not just textbook knowledge when learning a foreign language. I’m developing an AI language learning partner that helps people practice real-world scenarios and cultural contexts.
What excites me most is the technical challenge of building AI solutions that take into account cultural context and conversational nuance. On the consulting side, I’m energized by working with companies that want to solve real problems rather than just implementing AI for the sake of having AI. Whether it’s working on risk models or streamlining information retrieval, I love projects where domain expertise and practical AI intersect.
Many companies are eager to “do something with AI” but don’t always know where to start. What’s your typical process for helping a new client scope and prioritize their first AI initiative?
I take a problem-first approach rather than lead with AI solutions. Too many companies want to “do something with AI” without identifying what specific business problem they’re trying to solve, which usually leads to impressive demos that don’t move the needle.
My typical process follows three steps:
First, I focus on problem diagnosis. We identify specific pain points with measurable impact. For example, I recently worked with a client in the restaurant space facing slowing revenue growth. Instead of jumping to an “AI-powered solution,” we examined customer review data to identify patterns: which menu items drove complaints, what service elements generated positive feedback, and which operational issues appeared most frequently. This data-driven diagnosis led to specific recommendations rather than generic AI implementations.
Second, we define success upfront. I insist on quantifiable metrics like time savings, quality improvements, or revenue increases. If we can’t measure it, we can’t prove it worked. This prevents scope creep and ensures we’re solving real problems, not just building cool technology.
Third, we go through viable solutions and align on the best one. Sometimes that’s a visualization dashboard, sometimes it’s a RAG system, sometimes it’s adding predictive capabilities. AI isn’t always the answer, but when it is, we know exactly why we’re using it and what success looks like.
This approach has delivered positive results. Clients typically see improved decision-making speed and clearer data insights. While I’m building my independent practice, focusing on real problems rather than AI buzzwords has been key to client satisfaction and repeat engagements.
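For context on one of the solution types mentioned above: a retrieval-augmented generation (RAG) system retrieves the documents most relevant to a question and passes them to a language model as grounding context. Below is a minimal sketch of that flow in Python; the toy word-overlap scorer and sample documents are illustrative stand-ins for a real embedding model and vector store, not anyone's production system.

```python
# Toy retrieval-augmented generation (RAG) flow. The word-overlap scorer is a
# stand-in for a real embedding model; the documents are invented examples.
documents = [
    "menu complaints: the seafood platter draws the most complaints",
    "service praise: reviews frequently praise the patio staff",
]

def score(query: str, text: str) -> int:
    """Toy relevance score: count words shared between query and document."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query: str) -> str:
    """Return the single most relevant document for the query."""
    return max(documents, key=lambda text: score(query, text))

query = "what draws the most complaints"
context = retrieve(query)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # in a real system, this grounded prompt is sent to an LLM
```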
You’ve mentored aspiring data scientists — what’s one common pitfall you see among people trying to break into the field, and how do you advise them to avoid it?
The biggest pitfall I see is trying to learn everything instead of focusing on one role. Many people, including myself early on, feel like they need to take every AI course and master every concept before they’re “qualified.”
The reality is that data science encompasses very different roles: from product data scientists running A/B tests to ML engineers deploying models in production. You don’t need to be an expert at everything.
My advice: Pick your lane first. Figure out which role excites you most, then focus on sharpening those core skills. I personally transitioned from analyst to ML engineer by intensely studying machine learning and taking on real projects (you can read my transition story here). I leveraged my domain expertise in credit and fraud risk, and applied this to feature engineering and business impact calculations.
The key is applying these skills to real problems, not getting stuck in tutorial hell. I see this pattern constantly through my newsletter and mentoring. People who break through are the ones who start building, even when they don’t feel ready.
The landscape of AI roles keeps evolving. How should newcomers decide where to focus — ML engineering, data analytics, LLMs, or something else entirely?
Start with your current skill set and what interests you, not what sounds most prestigious. I’ve worked across different roles (analyst, data scientist, ML engineer) and each brought valuable, transferable skills.
Here’s how I’d approach the decision:
If you’re coming from a business background: Product data scientist roles are often the easiest entry point. Focus on SQL, A/B testing, and data visualization skills. These roles often value business intuition over deep technical skills.
If you have programming experience: Consider ML engineering or AI engineering. The demand is high, and you can build on existing software development skills.
If you’re drawn to infrastructure: MLOps engineering is highly in demand, especially as more companies deploy ML and AI models at scale.
The landscape keeps evolving, but as mentioned above, domain expertise often matters more than following the latest trend. I won that ML competition because I understood credit risk fundamentals, not because I knew the fanciest algorithms.
Focus on solving real problems in domains you understand, then let the technical skills follow. To learn more about different roles, I’ve written about the 5 types of data science career paths here.
What’s one AI or data science topic you think more people should be writing about or one trend you’re watching closely right now?
I’ve been blown away by the speed and quality of text-to-speech (TTS) technology in mimicking real conversational patterns and tone. I think more people should be writing about TTS technology for endangered language preservation.
As a polyglot who’s passionate about cross-cultural understanding, I’m fascinated by how AI could help prevent languages from disappearing entirely. Most TTS development focuses on major languages with massive datasets, but there are over 7,000 languages worldwide, and many are at risk of extinction.
What excites me is the potential for AI to create voice synthesis for languages that might only have a few hundred speakers left. This is technology serving humanity and cultural preservation at its best! When a language dies, we lose unique ways of thinking about the world, specific knowledge systems, and cultural memory that can’t be translated.
The trend I’m watching closely is how transfer learning and voice cloning are making this technically feasible. We’re reaching a point where you might only need hours rather than thousands of hours of audio data to create quality TTS for new languages, especially using existing multilingual models. While this technology raises valid concerns about misuse, applications like language preservation show how we can use these capabilities responsibly for cultural good.
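As one illustration of how accessible multilingual voice cloning has become, the open-source Coqui TTS library ships a model that can clone a voice from a short reference clip. This is a minimal sketch under stated assumptions: the model name, language code, and file paths are examples to be checked against the library's documentation, not a specific recommendation from the interview.

```python
# pip install TTS  -- Coqui's open-source text-to-speech library.
from TTS.api import TTS

# XTTS v2 is a multilingual model that can clone a voice from a short
# reference recording, illustrating the "hours, not thousands of hours" point.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="A sentence in the target language would go here.",
    speaker_wav="reference_speaker.wav",  # hypothetical short reference clip
    language="en",                        # language code for the output speech
    file_path="cloned_voice.wav",
)
```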
As I continue developing my language learning product and building my consulting practice, I’m constantly reminded that the most interesting AI applications often come from combining technical capabilities with deep domain understanding. Whether it’s building machine learning models or cultural communication tools, the magic happens at the intersection.
To learn more about Claudia’s work and stay up-to-date with her latest articles, you can follow her on TDS, Substack, or LinkedIn.
UAE: Dubai now inviting AI experts and companies to collaborate on building AI-driven government services
Second cycle of AI Accelerator: What’s new and what it offers
The Dubai Centre for Artificial Intelligence (DCAI) has officially begun accepting applications for the second edition of its “Future of AI in Government Services Accelerator.” Organised in partnership with Dubai Future Accelerators, an initiative of the Dubai Future Foundation (DFF), the programme is inviting both local and international tech innovators and AI-focused companies from around the world.
This accelerator is designed to match private sector AI expertise with the needs of over 20 participating Dubai government entities. It targets four specific areas:
Enhancing existing public services through AI-driven personalisation and efficiency.
Creating new services that solve previously intractable challenges.
Integrating AI into daily operations to drive internal government productivity.
Making services more inclusive, particularly by developing solutions that break accessibility barriers.
Programme details:
Applications for the programme are open until 29 August 2025, with selected companies invited to join an intensive, eight-week engagement in Dubai from 6 October to 28 November 2025.
Participants will be hosted on-site in Dubai, where they’ll work directly with government teams, AI experts, and decision-makers on developing and deploying real solutions.
The programme is designed to eliminate financial and operational barriers for participants. Startups will receive fully sponsored accommodation and travel, enabling them to focus entirely on innovation. Importantly, the accelerator takes no equity stake, ensuring companies retain full ownership of their intellectual property.
Participants will also benefit from direct access to senior government officials, along with daily collaboration alongside subject matter experts across relevant public sectors. This structure not only accelerates implementation but also opens up opportunities for longer-term partnerships within Dubai’s evolving AI ecosystem.
Saeed Al Falasi, Executive Director of DCAI, described the accelerator as a step toward building a city where AI makes public services faster, smarter, and more responsive.
He stated that the programme aims to help place Dubai among the world’s top cities in delivering agile, AI-powered public services, with quality of life, efficiency, and happiness at the centre.
Outcomes from the first cycle
Launched in 2024, the inaugural cycle attracted 615 companies from 55 countries, resulting in 183 defined AI use cases, of which 75 were piloted in real government environments. Over eight weeks, startups and government teams co-developed and tested AI applications tailored to public sector needs, signalling Dubai’s serious commitment to applied innovation.
Dubai Centre for Artificial Intelligence: Structure and purpose
Founded in June 2023, the DCAI is a multi-agency initiative led by the Dubai Future Foundation, alongside the Digital Dubai Authority, Dubai Electricity and Water Authority (DEWA), and the Dubai Media Council. It was launched under the direction of H.H. Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, Crown Prince of Dubai, and Chairman of DFF’s Board of Trustees.
DCAI’s core mission is to scale AI across Dubai’s government ecosystem. It operates along three strategic pillars:
Legislation & Policy: Developing governance models, regulations, and ethical standards for responsible AI adoption.
Awareness & Capacity-Building: Delivering AI literacy and training programs for civil servants and the wider public.
Technology Adoption & Innovation: Partnering with startups, tech companies, and global institutions to deploy AI in real-world government use cases.
A key part of DCAI’s approach is to make AI trustworthy, inclusive, and beneficial for every segment of society—not just technologically advanced, but human-centered and ethically grounded.
Expanding the ecosystem: From AI Seal to AI Week
DCAI’s AI accelerator is just one pillar of a broader strategy designed to turn Dubai into a global AI capital. Several major initiatives have been launched under this vision:
Dubai AI Seal
Introduced in January 2025, the Dubai AI Seal is a first-of-its-kind certification programme that identifies AI companies as trusted partners for government use. Companies awarded the seal receive a unique serial number and classification, which can be showcased across digital and marketing platforms.
To work on UAE or Dubai government AI projects, certification under the AI Seal has become a mandatory requirement. This promotes trust and accountability in deploying advanced AI systems.
One Million Prompters
Unveiled in May 2024, this initiative aims to train one million people in the emerging discipline of AI prompt engineering over three years. Participants gain practical skills in crafting and optimising prompts for generative AI systems, a capability seen as foundational for the future digital workforce.
The One Million Prompters initiative builds on the success of the Global Prompt Engineering Championship, launched under the directives of Sheikh Hamdan bin Mohammed in May 2024. Held at the Museum of the Future, it attracted participants from nearly 100 countries, with competitions in literature, art, and coding, awarding AED 1 million in prizes to promote generative AI leadership.
The 2025 edition, hosted during Dubai AI Week, attracted over 3,800 participants from 125 countries, with 24 finalists from 16 nations competing live in Dubai.
Winners included Abdulrahman Al Marzooqi (UAE) in Coding, Ibrahim Hajjo (Syria) in Video, Ibrahim Helmy (Canada) in Gaming, and Yahya Kaddoura (Palestine) in Art. Organised by DCAI and supported by the Dubai Chamber of Digital Economy, the event awarded a share of AED 1 million in prizes and served as a key feature of Dubai’s growing AI talent agenda.
Dubai AI Academy
Launched during Dubai AI Week 2025, the Academy aims to train 10,000 professionals, including government officials and business leaders, in AI leadership and application.
Operated by DCAI, it is housed within the Dubai AI Campus at DIFC Innovation Hub.
Educational content is delivered in collaboration with:
Oxford Saïd Business School
Udacity
The Minerva Project
The academy offers courses like AI for Civil Service and Leadership in the Age of AI, aimed at equipping leaders with the skills to apply AI strategically and responsibly. The curriculum combines technical knowledge with business and policy insights, aligning with the Dubai Universal Blueprint for AI and the DIFC 2030 Strategy.
Dubai AI Week
Launched in April 2025, Dubai AI Week, now in its second edition, is another flagship initiative of the Dubai Centre for Artificial Intelligence, organised with the Dubai Future Foundation under the directives of H.H. Sheikh Hamdan bin Mohammed. Held from 21 to 25 April across the Museum of the Future, Emirates Towers, and Madinat Jumeirah, the event featured over 180 sessions and workshops, more than 150 activations, delegations from 140+ countries, and participation from over 25 global technology companies.
With headline events including the AI Retreat, Dubai Assembly for AI, Global Prompt Engineering Championship, AI Week in Schools, the Dubai AI Festival, and the Machines Can See conference, the week reinforced Dubai’s position as a global hub for artificial intelligence, all aligned with the Dubai Universal Blueprint for AI.
Dubai Future Foundation: The Strategic Backbone
The Dubai Future Foundation (DFF) is the government entity responsible for shaping the city’s long-term innovation agenda through emerging technologies, strategic foresight, and cross-sector collaboration.
Established in 2016, the Foundation operates under the directives of H.H. Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, and serves as the umbrella organisation overseeing all initiatives, programmes, and projects driven by the Dubai Centre for Artificial Intelligence (DCAI).
From accelerators and AI academies to global certification schemes, talent development efforts, and flagship events like Dubai AI Week, DFF ensures every initiative is aligned with Dubai’s broader transformation goals, particularly those outlined in the D33 Economic Agenda.
Its role is not only to incubate future-ready programmes but also to embed governance, scalability, and ethical deployment into Dubai’s AI ecosystem.
Strategic alignment: D33 Agenda and the Universal Blueprint for AI
All of these efforts, from accelerators to certifications and academies, fall under a unified vision set out in the Dubai Universal Blueprint for Artificial Intelligence. This overarching strategy aims to drive digital transformation and is aligned with the D33 Economic Agenda, which targets an AED 100 billion (USD 27 billion) annual contribution to GDP through AI and digital innovation.
The Blueprint’s goals include:
Embedding AI into all strategic sectors
Establishing AI and Web3 incubators
Appointing Chief AI Officers in every government entity
Launching AI Weeks in schools and universities
Issuing commercial AI licences
Allocating land for AI-ready data centres
Delivering world-leading public services through AI integration
The D33 plan positions Dubai as not only a leader in AI deployment but also in AI governance, education, talent development, and infrastructure.
We’re all techies now: Digital skill building for the future
Achieving the full benefits of digital and AI technologies at scale is critical, as the gap between tech leaders and laggards is widening. McKinsey research shows that companies with leading digital and AI capabilities outperform lagging competitors by two to six times in terms of total shareholder returns. Reaching this level of success requires not only talent with the deep technical skills to deploy and innovate new technologies but also an employee base that is more digitally fluent overall.
About the authors: This article is a collaborative effort by Bryan Hancock, Brooke Weddle, Heather Stefanski, and Maisha Glover, with Emily Rizzi and Hannah Mowery, representing views from McKinsey’s People & Organizational Performance Practice.
Now more than ever, for organizations to perform at their best, all employees need to be techies. Executives, too, need to become more tech-savvy. Business leaders are increasingly responsible for delivering tech-enabled products, which requires a broader and stronger technical foundation. Depending on their business, they may need to know where the company is on its cloud migration journey to understand the true costs of new products. They may need enough enterprise architecture knowledge to understand the trade-offs between custom-developed and off-the-shelf solutions. They may need sophisticated insights into cybersecurity risks. Strong data governance relies on stewards who understand what data is needed, what it means, and how to leverage analytics and machine learning. With greater technical knowledge, business leaders can prioritize rewiring their organizations (deeply integrating technology across all core processes) to gain competitive advantages.
This reality is compelling companies to take new approaches in enhancing employees’ technical skills to improve their flexibility, productivity, and performance, as well as to retain top talent. Rather than rolling out one-size-fits-all training efforts, the companies that are most successful in upskilling prioritize targeted efforts that close skills gaps for talent in areas that are critical to their long-term strategy. They meet learners where they are, offering a variety of virtual and in-person programs tailored to remote and hybrid workforces, and they take learning beyond the classroom into the real world. They create cultures of continuous learning and improvement that keep current employees engaged and motivated while attracting new talent who strive to develop in their careers. And they tie learning to critical business outcomes, incentivizing leaders to establish effective upskilling programs and holding them accountable for results.
After many interviews with business leaders in technology and other sectors, we see opportunities for more companies to develop better programs to raise their digital competencies. In this article, we look at why upskilling is both a business and talent imperative, how organizations can start to build their digital capabilities, and how some companies have created savvier tech teams to avoid falling behind their competitors.
Mutual benefits of upskilling
The need to focus on skill building is not new, but it has taken on greater urgency as labor markets tighten and companies have greater demand for people who can keep up with new technologies that are reshaping how work gets done. In a recent survey of more than 80 leaders of tech-focused US organizations, 80 percent of respondents say upskilling is the most effective way to reduce employee skills gaps (Exhibit 1). However, the survey reveals that only 28 percent of organizations are planning to invest in upskilling programs over the next two to three years.
Companies that are slow to launch skill-building efforts risk missing out on important benefits for their people and for the organization itself. Previous McKinsey research has shown that skills are the top barrier for employees who are willing to switch occupations and expand their career opportunities. Workers who seek upskilling represent a highly motivated, desirable talent pool for employers. At the same time, companies that excel in people development achieve more consistent profits and demonstrate higher resilience. They are also better at retaining talent, with attrition rates about five percentage points lower than those of organizations that focus more heavily on financial performance. Organizations that focus on both human capital development and financial performance are four times as likely to outperform their competitors financially.
Time to get started
As we listen to CEOs, their message is clear: The future is profoundly tech enabled. According to recent McKinsey research, 92 percent of companies plan to increase their investments in AI over the next three years, and only 1 percent of leaders describe their current AI deployment as “mature.” In this rapidly evolving environment, companies urgently need to help their employees enhance their tech skill sets and better understand both their industries and their organizations. One-third of the tech leaders we interviewed cite a lack of industry- and company-specific knowledge as one of the biggest skills gaps in their organizations. One respondent, who notes that there has been a competition for talent for decades, says the biggest challenge now is developing and retaining talent who have technical skills along with knowledge of the company, its technical capabilities, and its ways of working.

A previous McKinsey analysis of 4.3 million job postings across technology sectors reveals a wide skills gap, with fewer than half as many potential candidates possessing the high-demand tech skills as job postings require. The World Economic Forum estimates that nearly six in ten workers will require training before 2030 and that 22 percent of jobs globally will change due to technological advancements, the transition to a more sustainable economy, and demographic and geoeconomic shifts.

Meanwhile, McKinsey research reveals that while nearly all employees and C-suite leaders have some familiarity with gen AI tools, nearly half of employees want more formal training. In the survey of US tech leaders, respondents cite several reasons for their companies’ skills gaps, including outdated skill sets and insufficient training, at 46 percent; lack of experience among employees, at 43 percent; and misalignment between skill sets being taught in educational institutions and the actual needs of organizations, at 37 percent (Exhibit 2). To reap the full benefits of AI, companies will need to fill these skills gaps.
To meet this moment, companies can take five steps to begin investing in and developing their upskilling efforts:
Identify skills that are most important to long-term strategy and align business leaders. Companies can’t upskill in every area of work. Business leaders should prioritize the skills that will help their organizations win against the competition, close their biggest gaps, and attract and retain top talent. Ensure that upskilling is a shared priority among senior leaders and that they serve as role models to reinforce its importance for the organization.
Create a holistic upskilling strategy. Learn from organizations that have succeeded with upskilling and craft a strategy that will work for your team. Start by piloting ideas with small and open groups and scale from there. Build support within the organization as you adopt certain initiatives to make sure they stick.
Develop learning experiences quickly and iteratively. Leverage analytics, gen AI, and partners such as universities and reputable learning providers to swiftly deliver upskilling and retooling curricula to meet business needs. Reimagine curricula with a greater emphasis on teaching practical applications, embedding real-time feedback into tasks, and leveraging new tools.
Put the learner in the driver’s seat. Build a culture of learning and ownership. Help employees understand their roles in upskilling and integrate learning into day-to-day work to develop an always-on approach, enabling them to take charge of their journeys rather than waiting for formal training from the organization.

Reinforce the value of learning throughout the employee life cycle. Explicitly link skill building to employees’ goal setting, incentives, and career development. Ensure that managers provide employees with appropriate time and support to learn. Encourage managers to be teachers as employees apply their new skills beyond the classroom.
While taking these first steps, it is important to remember that it takes time to realize the full potential ROI from skill building. Leaders need to take a future-backed view of the opportunity and rewire the work itself to deliver and reinforce the learning. The common 70/20/10 framework suggests that employees gain 70 percent of their skills on the job, 20 percent from others, and 10 percent from formal learning. The principle of building more skills beyond formal learning will always be true, but this ratio may change as the lines blur between formal and on-the-job learning in the world of advanced-AI adoption. The future of learning isn’t about adding more training on top of work; it’s about reimagining work itself as inherently developmental. In this new paradigm, learning is not a “go away and do” activity but a seamless, integrated part of the work experience: personalized, continuous, and directly tied to business outcomes.
Effective skill building to close the gaps
Through our interviews with tech leaders and our work with companies on upskilling efforts, we find that skills gaps fall into three main categories: technical foundations, technical expertise, and business fundamentals. Taking a deeper dive into each, we identify specific skills targeted most often by companies and highlight examples of organizations that are succeeding in building their teams’ digital capabilities.
Technical foundations for all
Organizations can enhance employees’ baseline fluency on relevant tech-related topics to help them develop strong learning mindsets and adjust quickly as technology changes. The top basic tech skills that successful organizations focus on include gen AI and other emerging technologies, agile methodologies, data fluency, and engineering. They also ensure that employees understand the company’s current tech stack and how its components and teams work together.
Successful companies tend to work with learning partners to develop these skills virtually through live or on-demand courses. For organization-specific needs, they augment readily available courses with customized content that is cocreated by external learning and development professionals and internal subject matter experts.
In one example, a global consumer-packaged-goods company developed a digital academy that enrolled 3,000 employees to help the company build skills needed for a digital transformation of its manufacturing and supply chain operations. The senior-leadership team kicked off the process by aligning on a vision for the company’s digital transformation. The organization then developed more than 100 hours of learning content, which employees accessed through self-paced online courses as well as remote and in-person workshops. Content was tailored for specific roles, including frontline workers, change teams, and senior leaders. Importantly, the learning journey extended beyond the classroom to fundamentally change ways of working in the frontline-operations team. In the first 18 months after the launch of the digital academy, the company achieved a 20 to 40 percent increase in throughput and productivity within the team.
Deeper expertise for tech roles
Companies can help employees develop deeper technical knowledge and experience in their roles to address urgent business needs. These include areas such as AI and machine learning, cloud technology, product management, cybersecurity, and architecture. Successful organizations rapidly develop and customize training programs for these next-level tech skills, and they often provide learning opportunities through employee-led pathways (programs designed around individuals’ aspirations and goals) and external certification channels (see sidebar, “What’s in a badge?”).
What’s in a badge?

Employees want more than just learning from their training. They also seek recognition and a way to communicate their new skills, both internally and externally. The following three forms of recognition can help meet this need, and organizations can use them in any combination or all together:

Internal credentials. These are markers of proficiency that an organization awards to employees who demonstrate that they have learned and applied certain skills.

Badges. Companies award these emblems internally to employees for completing a training course or program.

Certifications. These are endorsements from a third party that help validate an employee’s knowledge and competency and are recognized in the external talent marketplace.
Which advanced tech skills companies prioritize in their upskilling efforts depends on their strategy and their digital maturity. Established, low-growth organizations tend to be the least advanced in their digital capabilities and have dated tech stacks, so they often focus on skills that will help them modernize: for example, cloud, DevSecOps (development, security, and operations), and architecture. Stable organizations that are growing in low- to mid-single digits and have started on their digital journeys focus on unlocking new growth by embracing product management and strengthening connections between technology and the business. Tech-focused companies, at the forefront of embracing new technologies, emphasize building skills that enhance the customer experience and accelerate growth.

Employee-led upskilling initiatives work well when they prioritize skills that help the organization develop a long-term competitive advantage and retain highly skilled talent. For example, a large professional-services firm launched a new AI consulting practice and needed a program to train hundreds of existing and new employees per quarter in AI, blockchain, robotics, and other advanced technologies. Given the speed of innovation, the program had to be able to scale up or down and evolve its content quickly as skill needs changed. Instead of instituting a more conventional learning program, the chief learning officer established a three-month “skills accelerator” that integrated learning into employees’ day-to-day work and kept pace with client demand and staffing needs. It incorporated self-assessed online exercises, “bootcamp”-style intensive workshops, and apprenticeships with the firm’s technology experts. The firm used the training results to assign staff to real-world projects, which provided further learning opportunities, and tracked the revenue impact. This initiative helped the firm become more agile and responsive to technology changes while allowing employees to feel more confident about keeping their positions in a tough job market.
Business fundamentals for tech talent
Just as business leaders need to understand tech, tech employees need to understand business. Employers can assist employees in tech roles with developing their business acumen, organizational knowledge, and the soft skills required to influence change across the company. These include complex problem-solving, creative thinking, communications and storytelling, stakeholder engagement, people management, and conflict resolution.
Companies that excel in helping upskill tech employees in these areas take a cross-functional approach to the content and tailor it for the tech organization, providing tangible examples to help employees apply it to their own day-to-day context. One global retailer recently created a comprehensive training academy for tech talent to develop both foundational business and tech skills, with a path to gaining deeper technical expertise over time. The company’s tech leadership was integrally involved in identifying the skill priorities for the program. These include building capabilities for managing both technical and nontechnical teams, as well as developing strategy and communications skills and institutional knowledge to lead change efforts in a large, matrixed environment. After successful pilots with more than 60 employees (more than 90 percent of whom recommended the program for their colleagues), the company is now scaling the effort to train more than 1,800 employees in its first full year. This initiative is also attracting new tech talent who are seeking more professional development.
The three categories of skills gaps described above affect people at all levels of organizations. Tech employees need upskilling in all three areasâfoundational knowledge of how technology works in their organizations, deeper skills in their technical discipline to deliver cutting-edge work, and a fundamental understanding of how their work fits into the bigger picture of the business. Team leaders and middle managers in tech roles need the expertise and credibility to lead tech workers, along with the ability to work cross-functionally and drive business results in complex environments. Also, senior managers and executives should have enough technical fluency to use it to create strategy and deliver business results (assuming they already have strong fundamental business skills).
Future upskilling efforts
With almost a quarter of US jobs expected to change by 2027 as the pace of technological innovation intensifies, upgrading skills is a strategic imperative for companies and employees. Building upon the strategies and examples covered above, organizations can consider the following forward-looking practices, which have the potential to accelerate skill building:
Bring learning into the flow of work. Develop managers’ coaching and teaching capabilities. Encourage first-line leaders, for instance, to assign their staff to tasks that require the skills they’ve learned. Demonstrate the power of experimentation and reflection on the job. Train managers to ask their direct reports regularly: “What have you learned this week? How can you use it next week?” Such subtle changes can rapidly boost upskilling impact.

Embed learning into the technology. Gen AI can help organizations quickly teach in real time. For example, if managers need to learn better protocols for holding staff meetings, they can use gen AI to explain their difficulties, receive instant suggestions for potential interventions, and even run simulations to test their skills. We are also seeing this application in contact centers, where the technology provides feedback and real-time instruction, like a thoughtful coach observing. Gen AI-powered tools, of course, must be managed carefully to protect against errors, but the speed of creation offers significant potential for learning and development teams.

Upskill experts and managers to become “learner teachers.” Experienced, highly skilled employees can be trained to serve as instructors and help learners put skills into real-world business contexts. At some organizations where this practice is being used, employees say they find it much more powerful to learn from peers rather than traditional facilitators. This is also true outside the classroom, where managers play an important role in coaching and teaching new skills in context.
Integrate and gamify the user experience of learning. Companies can use both internally and externally developed content and experiences embedded in technology tools, but they need to provide employees with a seamless user experience to make it easier to learn and track progress. Features such as interactive learning and collecting badges for achieving certain learning milestones can make the experience more enjoyable for employees.
Codify and democratize technical knowledge. Companies can develop technical-knowledge repositories to help employees learn from tools and techniques used by others and get answers to frequently asked questions. Employees can also review and “upvote” certain information so others know it is credible and useful. Integrating AI into these repositories can make them even more effective. The repositories can also help speed up onboarding, which is a big pain point for companies that are hiring a lot of tech talent.

More and more employees have started to use digital technologies in their work, a trend that will only continue in the years to come. To keep pace, organizations can accelerate their efforts to help their nontech employees build their digital skills. At the same time, companies can provide more advanced training for their tech teams to teach them business fundamentals so they can help make the organization more productive and competitive. Developing an effective upskilling strategy requires leaders to identify their organizations’ biggest gaps and opportunities and to align their corporate strategy and governance with responsive learning-and-development programs so that everyone is included in the effort to build digital capabilities for the future.
27 of the best large language models in 2025
LLMs are black box AI systems that use deep learning on extremely large datasets to understand and generate new text. Modern LLMs began taking shape in 2014 when the attention mechanism — a machine learning technique designed to mimic human cognitive attention — was introduced in a research paper titled “Neural Machine Translation by Jointly Learning to Align and Translate.” In 2017, that attention mechanism was honed with the introduction of the transformer model in another paper, “Attention Is All You Need.”
Some of the most well-known language models today are based on the transformer model, including the generative pre-trained transformer (GPT) series of LLMs and bidirectional encoder representations from transformers (BERT).
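To make the transformer’s core mechanism concrete, here is a minimal NumPy sketch of scaled dot-product attention as defined in “Attention Is All You Need.” The toy shapes and random inputs are illustrative only; real models add learned projections, multiple heads and masking.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    # Compare every query with every key, scaling to keep softmax stable.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted mix of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))                         # 3 tokens, 4-dimensional embeddings
print(scaled_dot_product_attention(x, x, x).shape)  # (3, 4)
```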
ChatGPT, which runs on a set of language models from OpenAI, attracted more than 100 million users just two months after its release in 2022. Since then, many competing models have been released. Some belong to big companies such as Google, Amazon and Microsoft; others are open source.
Constant developments in the field can be difficult to keep track of. Here are some of the most influential models, both past and present. The list includes models that paved the way for today’s leaders as well as those that could have a significant effect in the future.
Top current LLMs
Below are some of the most relevant large language models today. They span a range of natural language processing capabilities and have influenced the architecture of subsequent models.
BERT
BERT is a family of LLMs that Google introduced in 2018. BERT is a transformer-based model that can convert sequences of data to other sequences of data. BERT’s architecture is a stack of transformer encoders and features 342 million parameters. BERT was pre-trained on a large corpus of data and then fine-tuned for specific tasks such as natural language inference and sentence text similarity. It was used to improve query understanding in the 2019 iteration of Google search.
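To see BERT’s masked-language-model objective in action, the short sketch below uses the Hugging Face transformers library with the publicly released bert-base-uncased checkpoint; the library choice is ours, not part of Google’s original release.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# The fill-mask task exercises BERT's masked-language-model pre-training:
# the model predicts the most likely token for the [MASK] position.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```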
Claude
The Claude LLM focuses on constitutional AI, which shapes AI outputs guided by a set of principles that aim to make the AI assistant it powers helpful, harmless and accurate. Claude was created by the company Anthropic. Claude’s latest iterations understand nuance, humor and complex instructions better than earlier versions of the LLM. They also have broad programming capabilities that make them well-suited for application development.
There are three primary branches of Claude — Opus, Haiku and Sonnet. The Claude Sonnet 4 and Claude Opus 4 models debuted in early 2025. Opus 4, the premium model, can perform long-running tasks and agentic workflows. Sonnet 4, the efficiency-focused model, shows continued improvement in coding, reasoning and instruction-following compared to previous iterations. Both models also include:
Extended thinking with tool-use.
Improved memory and instruction-following.
Integrations with IDEs and APIs.
Code execution.
MCP connector.
Files API.
Prompt caching.
In October 2024, Claude added an experimental computer-use AI tool in public beta that enables the LLM to use a computer like a human does. It’s available to developers via the API.
Cohere
Cohere is an enterprise AI platform that provides several LLMs, including Command, Rerank and Embed. These LLMs can be custom-trained and fine-tuned to a specific company’s use case. The company that created the Cohere LLM was founded by one of the authors of “Attention Is All You Need.”
DeepSeek-R1
DeepSeek-R1 is an open-source reasoning model designed for complex problem-solving, mathematical reasoning and logical inference. The model uses reinforcement learning techniques to refine its reasoning ability, and it can check its own work through self-verification, chain-of-thought reasoning and reflection.
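As a hedged sketch of querying the model, the snippet below assumes DeepSeek’s OpenAI-compatible endpoint, the deepseek-reasoner model name and the reasoning_content response field, all as documented by DeepSeek at the time of writing.

```python
# Requires: pip install openai
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_KEY", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",  # serves DeepSeek-R1
    messages=[{"role": "user", "content": "If 3x + 7 = 25, what is x? Verify your answer."}],
)

message = response.choices[0].message
print(message.reasoning_content)  # chain of thought, returned separately per DeepSeek's docs
print(message.content)            # the final, verified answer
```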
Ernie
Ernie is Baidu’s large language model, which powers the Ernie chatbot. The bot was released in August 2023 and has garnered more than 45 million users. Near the time of its release, it was rumored to have 10 trillion parameters, which turned out to be an overestimation; later models have parameter counts in the billions. More recent versions include Ernie 4.5 and Ernie X1, which are based on a mixture-of-experts architecture. Baidu open-sourced its Ernie 4.5 LLM series in 2025.
Falcon
Falcon is a family of transformer-based models developed by the Technology Innovation Institute. It is open source and has multilingual capabilities. Falcon 2 is available in an 11 billion parameter version that provides multimodal capabilities for both text and vision. Falcon 3 is available in several sizes ranging from 1 billion to 10 billion parameters.
The Falcon series also includes a pair of larger models with Falcon 40B and Falcon 180B, as well as several specialized models. Falcon models are available on GitHub as well as on cloud providers including Amazon.
Gemini
Gemini is Google’s family of LLMs that power the company’s chatbot of the same name. The model replaced Palm in powering the chatbot, which was rebranded from Bard to Gemini upon the model switch. Gemini models are multimodal, meaning they can handle images, audio and video as well as text. Gemini is also integrated in many Google applications and products. It comes in several sizes — Ultra, Pro, Flash and Nano. Ultra is the largest and most capable model, Pro is the mid-tier model, Flash prioritizes speed for agentic systems and real-time applications, and Nano is the smallest model, designed for efficiency with on-device tasks.
Among the most recent models at the time of this writing are Gemini 2.5 Pro and Gemini 2.5 Flash.
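A brief sketch of calling Gemini from Python, assuming Google’s google-genai SDK and the gemini-2.5-flash model name, both per Google’s documentation at the time of writing:

```python
# Requires: pip install google-genai
from google import genai

client = genai.Client(api_key="YOUR_GEMINI_KEY")

# Text-only here for brevity; the same call accepts image, audio and video parts.
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize the transformer architecture in two sentences.",
)
print(response.text)
```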
Gemma
Gemma is a family of open-source language models from Google that were trained on the same resources as Gemini. Gemma 2 was released in June 2024 in two sizes — a 9 billion parameter model and a 27 billion parameter model. Gemma 3 was released in March 2025, with 1B, 4B, 12B and 27B versions, and has expanded capabilities. Gemma models can run locally on a personal computer, and are also available in Google Vertex AI.
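As an illustration of running Gemma locally, the sketch below assumes the google/gemma-2-9b-it checkpoint on Hugging Face (gated behind Google’s license terms) and hardware with enough memory for the 9B weights.

```python
# Requires: pip install transformers torch accelerate
from transformers import pipeline

# device_map="auto" places the weights on a GPU if one is available.
generator = pipeline("text-generation", model="google/gemma-2-9b-it", device_map="auto")

result = generator("Explain mixture-of-experts in one sentence.", max_new_tokens=64)
print(result[0]["generated_text"])
```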
GPT-3
GPT-3 is OpenAI’s large language model with more than 175 billion parameters, released in 2020. GPT-3 uses a decoder-only transformer architecture and is more than 100 times larger than its predecessor, GPT-2, which has 1.5 billion parameters. GPT-3’s training data includes Common Crawl, WebText2, Books1, Books2 and Wikipedia.
GPT-3 is the last of the GPT series of models in which OpenAI made the precise parameter counts publicly available. The GPT series was first introduced in 2018 with OpenAI’s paper “Improving Language Understanding by Generative Pre-Training.”
GPT-3.5
GPT-3.5 is an upgraded version of GPT-3. It was fine-tuned using reinforcement learning from human feedback. There are several models, with GPT-3.5 Turbo being the most capable, according to OpenAI. GPT-3.5’s training data extends to September 2021.
GPT-3.5 was also integrated into the Bing search engine before being replaced with GPT-4.
GPT-4
GPT-4 was released in 2023. Like the others in the OpenAI GPT family, it’s a transformer-based model. Unlike the others, its parameter count has not been released to the public, though there are rumors that the model has more than 1 trillion. OpenAI describes GPT-4 as a multimodal model, meaning it can process and generate both language and images as opposed to being limited to only language.
GPT-4 demonstrated human-level performance on multiple academic exams. At the model’s release, some speculated that GPT-4 came close to artificial general intelligence, meaning it would be as smart as or smarter than a human. That speculation turned out to be unfounded.
GPT-4o
GPT-4 Omni (GPT-4o) is OpenAI’s successor to GPT-4 and offers several improvements over the previous model. GPT-4o creates a more natural human interaction for ChatGPT and is a large multimodal model, accepting various inputs including audio, image and text. Users can engage as they would in a normal human conversation, and the model’s real-time interactivity can pick up on emotions. GPT-4o can see photos or screens and respond to questions about them during an interaction.
GPT-4o can respond in 232 milliseconds, similar to human response time and faster than GPT-4 Turbo. The free tier of ChatGPT runs on GPT-4o at the time of this writing.
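To illustrate the multimodal input, here is a minimal sketch using OpenAI’s Python SDK and chat completions API; the image URL is a placeholder, and any publicly reachable image would work.

```python
# Requires: pip install openai (reads OPENAI_API_KEY from the environment)
from openai import OpenAI

client = OpenAI()

# One user turn mixing text and an image, which GPT-4o processes together.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```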
Granite
The IBM Granite family of models is fully open source under the Apache 2.0 license. The first iteration of the open source models debuted in May 2024, followed by Granite 3.0 in October 2024, Granite 3.1 in December 2024, Granite 3.2 in February 2025 and Granite 3.3 in April 2025.
There are multiple variants in the Granite model family, including general-purpose models (8B and 2B variants), a guardrail model and mixture-of-experts models. While the models can be used for general-purpose deployments, IBM is focusing deployment and optimization on enterprise use cases such as customer service, IT automation and cybersecurity.
Grok
Grok is an LLM from xAI that powers a chatbot of the same name. Grok 3 was released in May 2025. Grok 3 mini is a smaller, more cost-efficient version of Grok 3. The Grok 3 chatbot gives the user two modes that augment the chatbot’s default state: Think mode and DeepSearch mode. In Think mode, Grok uses chain-of-thought reasoning, explaining outputs in step-by-step detail. DeepSearch delves more deeply into internet research to produce an output. Relative to other top models, Grok performs particularly well on reasoning and mathematics benchmarks such as GPQA and AIME. Grok 3 is closed source and written primarily in Rust and Python.
Grok’s training infrastructure is composed of the Colossus supercomputer, which contains more than 100,000 GPUs from Nvidia. The supercomputer was built in a repurposed Electrolux factory near Memphis, Tenn. xAI and Colossus have drawn criticism from residents and activists for a lack of transparency surrounding the environmental effects of the facility’s emissions.
The name Grok comes from Robert Heinlein’s 1961 novel, Stranger in a Strange Land. The book coined the term to describe the ability to understand something deeply.
Lamda
Lamda (Language Model for Dialogue Applications) is a family of LLMs developed by Google Brain in 2021. Lamda used a decoder-only transformer language model and was pre-trained on a large corpus of text. In 2022, Lamda gained widespread attention when then-Google engineer Blake Lemoine went public with claims that the program was sentient.
Llama
Large Language Model Meta AI (Llama) is Meta’s LLM, first released in 2023. The Llama 3.1 models were released in July 2024, including 405 billion and 70 billion parameter versions.
The most recent version is Llama 4, which was released in April 2025. There are three main models — Llama 4 Scout, Llama 4 Maverick and Llama 4 Behemoth. Behemoth is only available for preview at the time of this writing. Llama 4 is the first iteration of the Llama family to use a mixture-of-experts architecture.
Previous iterations of Llama used a transformer architecture and were trained on a variety of public data sources, including webpages from CommonCrawl, GitHub, Wikipedia and Project Gutenberg. Earlier versions of Llama were effectively leaked and spawned many descendants, including Vicuna and Orca. Llama is available under an open license, allowing for free use of the models. Llama models are available in many locations, including llama.com and Hugging Face.
Mistral
Mistral is a family of models from Mistral AI, several of which use a mixture-of-experts architecture. Mistral Large 2 was first released in July 2024. The model operates with 123 billion parameters and a 128k context window, supporting dozens of languages, including French, German, Spanish and Italian, along with more than 80 coding languages. In November 2024, Mistral released Pixtral Large, a 124-billion-parameter multimodal model that can handle text and visual data. Mistral Medium 3, released in May 2025, is touted as the company’s “frontier-class multimodal model.”
Mistral models are available via Mistral’s API to those with a Mistral billing account.
o1
The OpenAI o1 model family was first introduced in September 2024. The o1 models are what OpenAI calls reasoning models, which reason through a problem or query before offering a response.
The o1 models excel in STEM fields, with strong results in mathematical reasoning (scoring 83% on a qualifying exam for the International Mathematics Olympiad, compared with GPT-4o’s 13%), code generation and scientific research tasks. While they offer enhanced reasoning and improved safety features, they operate more slowly than previous models due to their thorough reasoning processes and come with certain limitations, such as restricted access features and higher API costs. The models are available to ChatGPT Plus and Team users, with varying access levels for different user categories.
o3
OpenAI introduced the successor model, o3, in December 2024. According to OpenAI, o3 is designed to handle tasks that demand more analytical thinking, problem-solving and complex reasoning, improving on o1’s capabilities and performance. The o3 model became available to the public in June 2025.
o4-mini
Like others in the o-series, o4-mini is a reasoning model that aims to excel at tasks that require complex reasoning and problem-solving. OpenAI claims that o4-mini is superior to o3-mini across all key benchmarks. It comes in two variants, o4-mini and o4-mini-high, the latter of which uses more extensive reasoning for complex problems. Like other mini variants from OpenAI, it is designed to be especially cost-efficient. The model also uses a technique called deliberative alignment, which aims to identify attempts to exploit the system and create unsafe content.
Orca
Orca is an LLM developed by Microsoft that has 13 billion parameters. It aims to improve on advancements made by other models by imitating the reasoning procedures achieved by LLMs. The research surrounding Orca involved teaching smaller models to reason the same way larger models do. Orca 2 was built on top of the 7 billion and 13 billion parameter versions of Llama 2.
Palm
The Pathways Language Model (Palm) is a 540 billion parameter transformer-based model from Google that powered its AI chatbot Bard. It was trained across multiple TPU v4 Pods, Google’s custom hardware for machine learning. Palm specializes in reasoning tasks such as coding, math, classification and question answering. Palm also excels at decomposing complex tasks into simpler subtasks.
Palm gets its name from a Google research initiative to build Pathways, aiming to create a single model that serves as a foundation for multiple use cases. In October 2024, the Palm API was deprecated, and users were encouraged to migrate to Gemini.
Phi
Phi is a transformer-based language model from Microsoft. The Phi 3.5 models were first released in August 2024, and Phi-4 models were released in late 2024 and early 2025. The series includes the base model, Phi-4-reasoning, Phi-4-reasoning-plus, Phi-4-mini-reasoning and Phi-4-mini-instruct.
Released under a Microsoft-branded MIT License, they are available for developers to download, use, and modify without restrictions, including for commercial purposes.
Qwen
Qwen is a large family of open models developed by Chinese internet giant Alibaba Cloud. The newest set of models is the Qwen 3 suite, which was pre-trained on almost twice the number of tokens as its predecessor. These models are suitable for a wide range of tasks, including code generation, structured data understanding and mathematical problem-solving, as well as general language understanding and generation.
StableLM
StableLM is a series of open language models developed by Stability AI, the company behind image generator Stable Diffusion.
StableLM 2 debuted in January 2024 with a 1.6 billion parameter model; a 12 billion parameter model followed in April 2024. StableLM 2 supports seven languages: English, Spanish, German, Italian, French, Portuguese and Dutch. Stability AI positions these models as offering different options for various use cases, with the 1.6B model suitable for specific, narrow tasks and faster processing, while the 12B model provides more capability but requires more computational resources.
Tülu 3
Allen Institute for AI’s Tülu 3 is an open-source 405 billion-parameter LLM. Tülu 3 405B was post-trained with methods that combine supervised fine-tuning and reinforcement learning at a larger scale. Tülu 3 uses a “reinforcement learning from verifiable rewards” framework for fine-tuning on tasks with verifiable outcomes, such as solving mathematical problems and following instructions.
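To illustrate the “verifiable rewards” idea, here is a hypothetical sketch in which the reward is 1 only when an answer can be checked programmatically. It is a conceptual illustration of the framework, not the Tülu 3 training code.

```python
import re

def verifiable_reward(completion: str, expected: str) -> float:
    """Reward 1.0 if the completion's final number matches the known answer, else 0.0."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return 1.0 if numbers and numbers[-1] == expected else 0.0

# A math prompt has a machine-checkable outcome, unlike open-ended generation.
print(verifiable_reward("12 * 7 = 84, so the answer is 84", "84"))  # 1.0
print(verifiable_reward("I think it is 82", "84"))                  # 0.0
```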
Vicuna 33B
Vicuna is another influential open source LLM derived from Llama. It was developed by LMSYS and was fine-tuned using data from sharegpt.com. It is smaller and less capable than GPT-4, according to several benchmarks, but performs well for a model of its size. Vicuna has only 33 billion parameters.