
Mint Primer | The search for an engine: Should Google worry?

Google became one of the world’s largest companies by building the world’s most popular internet search engine. But the advent of AI has spawned an army of rivals ready to upstage its monopoly. Now, with OpenAI showcasing SearchGPT, should Google be scared?

Is SearchGPT really novel?

Not really. Microsoft’s Bing, incidentally powered by OpenAI’s generative pre-trained transformer (GPT) foundation models, has already showcased the practical uses of a generative AI-driven search and browsing experience. SearchGPT remains within that ambit, but seeks to improve on it by having the AI search algorithm remember queries for follow-ups. So far, SearchGPT has been advertised as an early-stage experiment to see how generative AI might be fitted into commercial search products. That experiment will be key to understanding how Big Tech can monetize the future of search.


Can Google keep pace?

Well before SearchGPT, Google had unveiled its Search Generative Experience as an internal test product. And in May, it expanded AI-powered search with new features built on its latest AI model, Gemini. The essence of Google’s AI-powered search experience is the same as that of SearchGPT. The key difference is that Google has dominated search so far, and competitors with similar interfaces and algorithmic prowess could now out-muscle it in an industry it has monopolized globally. To be sure, neither Google’s nor OpenAI’s new search platform is openly available yet.


Who are the other competitors?

Bing is perhaps the best known among Google and OpenAI competitors. Another is the startup Perplexity AI, backed by Nvidia and Jeff Bezos, among others. Smaller, independent competitors include privacy-centred browser Brave’s AI search feature, You.com’s AI search, Komo, Phind and Waldo. But none have the deep pockets of Google, Microsoft and OpenAI.

Can this change how we use the internet?

Yes. A big change will come in the way search and targeted ads work. Today, search engine providers track internet activity and serve ads based on your usage, earning commissions from advertisers in return. A chatbot-like platform changes this: answers arrive directly in conversation, leaving less room for traditional ad placements. That matters because over half of Google parent Alphabet’s annual revenue comes from Search. For users, the big change could be in how they find new websites, since AI chat interfaces more closely control which information sources get surfaced.

Why should Google be worried?

Between April and June, Google earned $48.5 billion from its search business, so much of its core business depends on search. The AI search race could be won by whoever has the better, more powerful AI model. OpenAI’s GPT-4o, Meta’s Llama 3.1 and Anthropic’s Claude 3.5 Sonnet are all positioned to challenge Google’s Gemini 1.5 Pro in search today. Google has user stickiness and reputation on its side: OpenAI is new, and it is unlikely to oust a company with more than a quarter-century of search dominance overnight.


What could kill the $1trn artificial-intelligence boom?

Mr Pichai is not alone. New Street Research, a firm of analysts, estimates that Alphabet, Amazon, Meta and Microsoft will together splurge $104bn on building AI data centres this year. Add in spending by smaller tech firms and other industries and the total AI data-centre binge between 2023 and 2027 could reach $1.4trn.

The scale of this investment, and uncertainty over if and when it will pay off, is giving shareholders the jitters. The day after Alphabet’s results the Nasdaq, a tech-heavy index, fell by 4%, the biggest one-day drop since October 2022. This week analysts will pore over the quarterly results of Amazon and Microsoft, the world’s two biggest cloud companies, for clues as to how their AI businesses are faring.

For now, the tech giants show little inclination to pare back their investments, as Mr Pichai’s remarks show. That is good news for the myriad suppliers that are benefiting from the boom. Nvidia, a maker of AI chips that in June briefly became the world’s most valuable company, has grabbed most of the headlines. But the AI supply chain is far more sprawling. It spans hundreds of firms, from Taiwanese server manufacturers and Swiss engineering outfits to American power utilities. Many have seen a surge in demand since the launch of ChatGPT in 2022, and are themselves investing accordingly. In time, supply bottlenecks or waning demand could leave them over-extended.

Graphic: The Economist

AI investment can broadly be split into two. Half of it goes to chipmakers, with Nvidia the main beneficiary. The rest is spent on makers of equipment that keeps the chips whirring, ranging from networking gear to cooling systems. To assess the goings-on along the AI supply chain, The Economist has examined a basket of 60-odd such companies. Since the start of 2023 the mean share price of firms in our universe has risen by 106%, compared with a 42% increase in the S&P 500 index of American stocks (see chart). Over that time their expected sales for 2025 climbed by 14%, on average. That compares with a 1% increase across non-financial firms, excluding tech companies, in the S&P 500.
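The basket arithmetic here is straightforward to reproduce. Below is a minimal sketch with made-up company names and prices standing in for the 60-odd real firms; the figures are illustrative placeholders (chosen so the totals land near the article's 106% and 42%), not The Economist's actual data:

```python
# Equal-weighted mean share-price change for a basket of firms,
# as a percentage, versus a benchmark index over the same window.
# All names and figures below are illustrative, not real data.

def pct_change(start: float, end: float) -> float:
    """Percentage change from start to end."""
    return (end / start - 1) * 100

def basket_mean_return(prices: dict[str, tuple[float, float]]) -> float:
    """Mean of each firm's percentage price change (equal-weighted)."""
    changes = [pct_change(s, e) for s, e in prices.values()]
    return sum(changes) / len(changes)

# Hypothetical (start, end) prices for a tiny three-firm basket
basket = {
    "ChipCo":   (100.0, 250.0),   # +150%
    "ServerCo": (50.0,  90.0),    # +80%
    "CoolCo":   (20.0,  37.6),    # +88%
}
index = (3800.0, 5400.0)          # hypothetical benchmark start/end

print(round(basket_mean_return(basket), 1))   # mean basket return, %
print(round(pct_change(*index), 1))           # index return, %
```

Note the equal weighting: each firm counts the same regardless of market value, which is why one giant gainer such as Nvidia can dominate the group's expected sales without dominating the mean price change.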

The biggest gainers were chipmakers and server manufacturers (see chart). Nvidia accounted for almost a third of the rise in the group’s expected sales. It is forecast to sell $105bn of AI chips and related equipment this year, up from $48bn in its latest fiscal year. AMD, its nearest rival, will probably sell about $12bn of data-centre chips this year, up from $7bn. In June Broadcom, another chipmaker, said that its quarterly AI revenues jumped by 280%, year on year, to $3.1bn. It helps customers, including cloud providers, design their own chips, and also sells networking equipment. Two weeks later Micron, a maker of memory chips, said its data-centre revenues had also jumped, thanks to soaring AI demand.

Graphic: The Economist

Companies that make servers are also raking it in. Both Dell and Hewlett Packard Enterprise (HPE) said in their most recent earnings calls that sales of AI servers doubled in the past quarter. Foxconn, a Taiwanese manufacturer that assembles lots of Apple’s iPhones, also has a server business. In May it said its AI sales had tripled over the past year.

Other firms are seeing interest spike, even if new sales have not yet materialised. Eaton, an American maker of industrial machinery, said that in the past year it saw more than a four-fold increase in customer enquiries related to its AI data-centre products. AI servers can require up to ten times more power than conventional ones. Earl Austin junior, the boss of Quanta Services, a firm that builds renewable-power and transmission equipment, recently admitted that the surge in demand for its data-centre business had “caught me off guard a little bit”. Vertiv, which sells cooling systems used in data centres, noted in April that its pipeline of AI projects more than doubled within two months.

All this interest is setting off a further frenzy of investment. This year around two-thirds of firms in our sample are expected to raise their capital expenditure, relative to sales, above their five-year averages. Many companies are building new factories. They include Wiwynn, a Taiwanese server-maker, Supermicro, an American one, and Lumentum, an American seller of advanced networking cables. Many are also spending more on research and development.

Some companies are investing through acquisitions. This month AMD said it was buying Silo AI, a startup, to boost its AI capabilities. In January HPE announced that it would spend $14bn to buy Juniper Networks, a networking firm. In December Vertiv announced its purchase of CoolTera, a liquid-cooling specialist. The firm hopes this will help it scale up its production of liquid-cooling technology 40-fold.

Just as the spending ramps up, though, the threats to the AI supply chain are building. One problem is its heavy reliance on Nvidia. Baron Fung, of Dell’Oro Group, a research firm, notes that when Nvidia went from launching a new chip every two years to every year, the entire supply chain had to scramble to build new production lines and meet accelerated timelines. Future sales for lots of firms in the AI supply chain are predicated on keeping the world’s most valuable chipmaker happy.

Another threat stems from supply bottlenecks, most notably in the availability of power. An analysis by Bernstein, a broker, looks at a scenario in which by 2030 AI tools are used roughly as much as Google search is today. That would raise the growth in power demand in America to 7% a year, from 0.2% between 2010 and 2022. It would be hard to build that much power capacity swiftly. Stephen Byrd of Morgan Stanley, a bank, notes that in California, where many AI data centres could be built, it takes six to ten years to get connected to the grid.
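The gap between those two growth rates compounds quickly. A back-of-envelope check (assuming simple annual compounding over 2022-2030, which is our simplification, not Bernstein's exact model):

```python
# Compound growth in US power demand under two annual rates.
# The 2022-2030 window and simple compounding are assumptions
# made for illustration, not Bernstein's actual methodology.

def compound(rate: float, years: int) -> float:
    """Total growth multiple after `years` at annual `rate`."""
    return (1 + rate) ** years

years = 2030 - 2022
old_trend = compound(0.002, years)   # 0.2% a year, the 2010-22 trend
ai_scenario = compound(0.07, years)  # 7% a year, Bernstein's AI scenario

print(f"old trend:   x{old_trend:.2f}")    # roughly 2% more demand by 2030
print(f"AI scenario: x{ai_scenario:.2f}")  # roughly 72% more demand by 2030
```

Under the old trend, demand barely moves by 2030; under the AI scenario it rises by nearly three-quarters, which is why grid-connection lead times of six to ten years become the binding constraint.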

Some companies are already trying to fill the gaps by providing off-grid power. In March Talen Energy, a power company, sold Amazon a data centre connected to a nuclear-power plant for $650m. CoreWeave, a small AI cloud provider, recently struck a deal with Bloom Energy, a fuel-cell maker, to produce on-site power. Others are repurposing sites such as bitcoin-mining locations that already have grid access and power infrastructure. Still, the energy needs for AI are so vast that the risk of a power shortage limiting activity remains.

The biggest threat to the AI supply chain would come from waning demand. In June Goldman Sachs, a bank, and Sequoia, a venture-capital firm, published reports questioning the benefits of current generative-AI tools, and—by extension—the wisdom of the cloud-computing giants’ spending bonanza. If AI profits remain elusive, the tech giants could cut capital spending, leaving the supply chain exposed.

The build-out of factories has brought higher fixed costs. Across our sample of firms the median spending on property, plants and equipment is expected to jump by 14% between 2023 and 2025. Some investments may start to look suspect if demand is slow to materialise. The price tag on HPE’s purchase of Juniper Networks was two-thirds of the acquirer’s market value when it was announced in January.

Even after the wobbles of last week, market expectations remain bullish. For our sample of firms the median price-to-earnings ratio, a measure of how investors value profits, has climbed by nine percentage points since the start of 2023. If such expectations are to be met, AI tools need to improve quickly, and businesses need to adopt them en masse. For the many companies along the AI supply chain, the stakes are getting uncomfortably high.

© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com


Why OpenAI-Google battle is not just about search. It’s also about building the most powerful AI

While this is the obvious part, beneath the surface, the bigger fight is also about controlling all streams of user data, including those from search engines and social media, which can help big tech companies such as Google, OpenAI, Microsoft, Meta, Nvidia and Elon Musk’s xAI build the world’s most powerful artificial intelligence (AI) model.

ChatGPT managed to garner more than 100 million users within two months of its launch in November 2022, prompting many to dub it a search-engine killer. The reason was that ChatGPT lets us write poems, articles, tweets, books and even code like humans, and is interactive, while search engines passively provide article links. Microsoft, which has a stake in OpenAI, even integrated ChatGPT with its own search engine, Bing. At that time, though, ChatGPT was still being tested and lacked knowledge of current events, having been trained on data only till the end of 2021.

From September 2023, ChatGPT began accessing the internet, thus providing up-to-date information. But it started facing allegations of “verbatim”, “paraphrase” and “idea” plagiarism and copyright violations from publishers around the world. Late last year, for instance, The New York Times initiated legal proceedings against Microsoft and OpenAI, alleging unauthorized “copying and using millions of its articles”. OpenAI did give publishers the option to block bots from crawling their content, but separating AI bots from search-engine crawlers such as Google’s or Bing’s, which facilitate page indexing and visibility in search results, is easier said than done.

OpenAI’s SearchGPT prototype, which is currently available for testing, will not only access the web but also provide “clear links to relevant sources”, the company said in a blog post on 26 July. This implies that more than targeting Google’s search engine, OpenAI appears to be trying to pacify and rebuild rapport with publishers it has antagonised. And this time around, OpenAI is “…also launching a way for publishers to manage how they appear in SearchGPT, so publishers have more choices”.

It clarifies that SearchGPT is about search and “separate from training OpenAI’s generative AI foundation models”. It adds that search results will show sites even if they opt out of generative AI training. OpenAI explains that a webmaster can allow its OAI-SearchBot to appear in search results while disallowing GPTBot, to indicate that crawled content should not be used for training OpenAI’s generative AI foundation models.
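In practice, that per-crawler control is expressed in a site’s robots.txt file. A minimal sketch of such a policy, using the two crawler names mentioned above (OAI-SearchBot and GPTBot); which paths to allow or block is, of course, each publisher’s choice:

```
# robots.txt — let OpenAI's search crawler index the site,
# but block its training crawler entirely
User-agent: OAI-SearchBot
Allow: /

User-agent: GPTBot
Disallow: /
```

This works only if crawlers honour the file; the Robots Exclusion Protocol is a convention, not an enforcement mechanism, which is part of why publishers remain wary.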

Equations are changing, but slowly

To be sure, ChatGPT’s success is already making a dent in Google’s worldwide lead; Google makes most of its revenue from advertising. For instance, Google registered its smallest desktop search market share in more than a decade. Microsoft’s Bing, which backed and integrated ChatGPT into its service, surpassed a 10% market share on desktop devices, according to Statista.

Google, whose search advertising revenue was $279.3 billion in 2023, is taking a hit, with many users already turning first to generative AI (GenAI) when searching for information online. “Many companies heard the call and saw $13 billion invested in generative AI (GenAI) for broad usage, namely search engines and large language models (LLMs), in 2023,” according to Statista.

Yet, Google, according to Statista, continues to control more than 90% of the search-engine market worldwide across all devices, handling over 60% of all search queries in the US alone and generating over $206.5 billion in ad revenues from its search engine and YouTube. In India, too, the search-engine giant has a market share of over 92%. In countries like Germany and France, however, online users are increasingly choosing “privacy- or sustainability-focused alternatives such as DuckDuckGo or Ecosia”, according to Statista. China, for its part, has Baidu, while South Korea favours Naver; even Russia’s Yandex now has the third-largest market share among search engines worldwide.

ChatGPT certainly did not topple Google, agrees Dan Faggella, founder of market research firm Emerj Artificial Intelligence Research. “But it (OpenAI) definitely was seemingly their strongest real competitor,” he adds. “I’m much more nervous for Perplexity in, say, the next three months than I am about Google,” says Faggella, citing the lack of a “differentiator”.

“I think it’s a cool app. But I wonder if there’s enough of a context wrap for things like enterprise search. Google used to do enterprise search but no longer sees sense in it,” he adds. Perplexity, which has raised $100 million from the likes of Amazon founder Jeff Bezos and Nvidia, was valued at $520 million in its last funding round.

In a February interview with Mint, Perplexity chief executive Aravind Srinivas argued that while Google will continue to have a “90-94% market share”, it will lose “a lot of the high-value traffic—from people who live in high-GDP countries and earning a lot of money, and those who value their time and are willing to pay for a service that helps them with the task”. He argued that over time, “the high-value traffic will slowly go elsewhere”, while low-value “navigational traffic” will remain on Google, making Google “a legacy platform that supports a lot of navigation services”.

“The bigger consideration is that the means and interfaces through which search occurs are evolving. These may become new interfaces other than the Chrome tab, where Google can very much get pushed aside, and I think the VR (virtual reality) ecosystem will be part of that as well. I don’t see Google dying tomorrow. But I think they should be shaking in their boots a little bit around what the future of search will be,” says Faggella.

Race to dominate the AI space

Faggella believes that “search is a subset of a much broader substrate monopoly game. It’s all about owning the streams of attention and activity—from personal and business users for things like their workflows, personal lives and conversations to help them (big tech companies) build the most powerful AI”. This, he explains, is why all the big companies want you to use their chat assistant, so that they can continue to dominate economically.

Faggella believes that all these moves indicate that the big tech companies, including Google, Meta and OpenAI, “are ardently moving towards artificial general intelligence (AGI)”. “Apple’s a little quieter about it. I don’t know where Tim Cook stands. They’re always a little bit more standoffish. But suffice it to say, they’re probably in that same running as well, although seemingly not as overt about it,” he adds.

OpenAI, for instance, has multimodal GenAI models, including GPT-4o and GPT-4 Turbo, while Google’s Gemini 1.5 Flash is available for free in more than 40 languages. Meta recently released Llama 3.1 with 405 billion parameters, the largest open model to date, while Mistral Large 2 is a 123 billion-parameter multilingual LLM. Big tech companies are also marching ahead on the path to AGI, which envisages AI systems that are smarter than humans.

OpenAI argues that because “…the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right…We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad and for AGI to be an amplifier of humanity”.

And OpenAI does not mind spending a lot of money to pursue this goal. The ChatGPT maker could lose as much as $5 billion this year, according to an analysis by The Information. However, in a conversation this May with Stanford adjunct lecturer Ravi Belani, Sam Altman said: “Whether we burn $500 million a year, or $5 billion or $50 billion a year, I don’t care. I genuinely don’t (care) as long as we can, I think, stay on a trajectory where eventually we create way more value for society than that, and as long as we can figure out a way to pay the bills. Like, we’re making AGI. It’s going to be expensive. It’s totally worth it.”

In July, Google DeepMind proposed six levels of AGI “based on depth (performance) and breadth (generality) of capabilities”. While the ‘0’ level is no AGI, the other five levels of AGI performance are: emerging, competent, expert, virtuoso and superhuman. Meta, too, says its long-term vision is to build AGI that is “open and built responsibly so that it can be widely available for everyone to benefit from”. Meanwhile, it plans to grow its AI infrastructure by the end of this year with two clusters of 24,000 graphics processing units (GPUs) each, using its in-house designed Grand Teton open GPU hardware platform.

Elon Musk’s xAI company, too, has unveiled the Memphis Supercluster, underscoring the partnership between xAI, X and Nvidia, while firming up his plans to build a massive supercomputer and “create the world’s most powerful AI”. Musk aims to have this supercomputer—which will integrate 100,000 ‘Hopper’ H100 Nvidia graphics processing units (and not Nvidia’s H200 chips or its upcoming Blackwell-based B100 and B200 GPUs)—up and running by the fall of 2025.

What can spoil the party

No AI model to date can be said to reason or feel as humans do. Even Google DeepMind underscores that, other than the ‘emerging’ level, the four higher AGI levels are yet to be achieved. LLMs, too, remain highly advanced next-word prediction machines and still hallucinate a lot, prompting sceptics like Gary Marcus, professor emeritus of psychology and neural science at New York University, to predict that the GenAI “…bubble will begin to burst within the next 12 months”, leading to an “AI winter of sorts”.

“My strong intuition, having studied neural networks for over 30 years (they were part of his dissertation) and LLMs since 2019, is that LLMs are simply never going to work reliably, at least not in the general form that so many people last year seemed to be hoping. Perhaps the deepest problem is that LLMs literally can’t sanity-check their own work,” says Marcus.

I elaborated on these points in my 19 July newsletter, Misplaced enthusiasm over AI Appreciation Day. When will AI, GenAI provide RoI?, where Daron Acemoglu, institute professor at the Massachusetts Institute of Technology (MIT), argues that while GenAI “is a true human invention” and should be “celebrated”, “too much optimism and hype may lead to the premature use of technologies that are not yet ready for prime time”. His interview was published in a recent report, Gen AI: too much spend, too little benefit?, by Goldman Sachs.

There’s also the fear that all big AI models will eventually run out of finite data sources like Common Crawl, Wikipedia and even YouTube to train their AI models. However, a report in The New York Times said many of the “most important web sources used for training AI models have restricted the use of their data”, citing a study published by the Data Provenance Initiative, an MIT-led research group.

“Indeed, there is only so much Wikipedia to vacuum up. It takes billions of dollars to train this thing, and you’re going to suck that up pretty quickly. You’re also going to start sucking up all the videos pretty quickly, despite how quickly we can pump them in,” Faggella agrees.

He believes that the future of AI development will involve integrating sensory data from real-world interactions, such as through cameras, audio, infrared, and tactile inputs, along with robotics. This transition will enable AI models to gain a deeper understanding of the physical world, enhancing their capabilities beyond what is possible with current data.

Faggella points out that the competition for real-world data and the strategic deployment of AI in robotics and life sciences will shape the future economy, with major corporations investing heavily in AI infrastructure and data acquisition, even as data privacy and security remain critical issues. He concludes, “The inevitable transition is to be touching the world.”


It’s swallowed billions of dollars, but has AI lived up to the hype?

Since AI’s most popular offering, OpenAI’s ChatGPT, debuted two years back and made esoteric AI tech accessible to the masses, there has been excitement over intelligent machines taking over mundane tasks or assisting humans in complex work. Geeks declared that costs would drop and productivity skyrocket, eventually leading to ‘artificial general intelligence’, when machines would run the world.

Huge sums were poured into companies focused on building AI solutions. In 2023, venture capital investments into Generative AI (a subset of AI to create text, images, video) startups totalled $21.3 billion, growing three-fold from $7.1 billion in 2022, according to consultancy EY.

But AI is a cash guzzler: Microsoft, Meta and Alphabet invested $32 billion in AI development in the first quarter of 2024 alone. The billions invested have gone into expensive hardware, software and power-hungry data centres, driving up Big Tech valuations, but without real benefits yet.

Enterprises, meanwhile, have mostly been waiting on the sidelines. With little return on investment (RoI) expected in the foreseeable future, they have been hesitant to deploy or depend entirely on AI. They also have doubts about the accuracy of AI-generated results, aside from concerns over data privacy and governance.

So, while huge sums of money have been invested in AI, the rate of adoption has been slow, costs (of access) are very high, and the output is not reliable. For all the money that has been spent, AI should be able to solve complex tasks. But the only visible beneficiaries are the few big companies with a stake in AI, such as AI chipmaker Nvidia, which saw its market value jump by over $2 trillion in under two years as investors picked the stock anticipating a disruptive change. But what happened on 24 July shows that investors are running out of patience.

Inflated expectations

Goldman Sachs forecasts there will be expenditure of $1 trillion over the next few years to develop AI infrastructure.

Last month, Wall Street investment bank Goldman Sachs released a 31-page report on AI, questioning its benefits. Titled ‘GenAI: Too much spend, too little benefit?’, the report points out that AI’s impact on productivity and economic returns may have been overestimated. Jim Covello, head of global equity research, Goldman Sachs, asked, “What $1 trillion problem will AI solve?”

The venerable investment bank forecasts there will be expenditure of $1 trillion over the next few years to develop AI infrastructure but casts doubts over returns or breakthrough applications. In fact, the report warns that if significant AI applications fail to materialize in the next 12-18 months, investor enthusiasm may wane.

The flow of funds is already thinning, particularly in early-stage AI ventures. While investments in AI startups surged in 2023, the first quarter of 2024 saw just $3 billion invested globally, according to the EY report. The consultancy projects total global investment to be in the region of $12 billion in 2024, a little over half the level in 2023.

“GenAI was crowned very quickly to be the best new thing to have happened since sliced bread,” said Archana Jahagirdar, founder and managing partner, Rukam Capital, a Delhi-based early-stage investor which has backed three AI ventures—unScript.ai, Beatoven.ai and upliance.ai. “Now, there’s a realization that GenAI tech is exciting, but monetizable use cases are yet to emerge.”

Daron Acemoglu, institute professor at MIT, noted in the Goldman Sachs report that “truly transformative changes won’t happen quickly. Only a quarter of AI exposed tasks will be cost effective to automate in the next 10 years”.

Indeed, technology research and consulting firm Gartner, which popularized the concept of the new-technology hype cycle, says that Generative AI has passed the peak of inflated expectations (marked by overenthusiasm and unrealistic projections) and is entering the trough of disillusionment.

Poor RoI

“The RoI (return on investment) is not in tune with the high capex on AI. At the heart of GenAI is the ability to summarize, synthesize and create content. People are using ChatGPT, like they use Google search,” said Arjun Rao, partner, Speciale Invest, a venture capital firm.

Comparisons with another disruptive technology, the internet, are inevitable. The internet impacted every area of work, business, the economy and society with tangible benefits: banks could expand without opening branches, and online retail could reach anyone without investing in physical stores. The internet led to the global IT services boom, as work could be sent online to tap affordable resources, resulting in a $250 billion industry in India employing nearly five million people. The internet offered cost-effective and efficient alternatives. AI, in contrast, currently looks set to replace low-wage jobs with expensive and not-yet-reliable technology.

“Unless there is RoI, companies will not invest. But we believe every business will be an AI business in future. Voice assistants are improving, and can also analyze conversations at scale. We do see adoption going up,” said Ganesh Gopalan, chief executive and co-founder, Gnani.ai. Set up by a group of former Texas Instruments engineers, Gnani.ai is a conversational AI platform backed by Samsung Ventures.

To be fair, technology disruptions are not easy and geeks tend to oversell ideas saying they will change the world. “A lot of people will lose money before they start making money,” Nishit Garg, partner, RTP Global Asia, an early-stage venture capital firm, told Mint. “This happens with every disruption we have seen, in cloud, internet and e-commerce. AI is going to raise the intelligence level of every organization. But before that happens it has to be affordable to use and error free.” RTP Global has invested in a few AI-led ventures, in areas such as market automation and drug development.

The internet, cloud, smartphones went through that hype cycle of lofty promises but eventually did improve and changed the way we work. Proponents argue that it takes a lot of money to set up infrastructure. For instance, it took billions of dollars to set up mobile networks before calls could be made.

Repeating history?

Back in 1905, Spanish-American philosopher George Santayana wrote: “Those who cannot remember the past are condemned to repeat it”. Geeks fervently believe that the next big tech idea will change the world. But history shows that many of the tech ideas that lured investors and enterprises like moths to light were either ahead of their time or just plain wrong.

For instance, after companies poured billions into solving the Y2K problem, the dotcom bubble started taking shape. Fuelled by investments in internet-based companies in the late 1990s, the value of equity markets grew exponentially during the dotcom bubble, with the Nasdaq rising from under 1,000 to more than 5,000 between 1995 and 2000. Everyone from autoparts sellers to the neighbourhood bakery was sold the idea that if they weren’t online they were doomed.

By the end of 2001, reality set in—companies were online but there were no users. The Nasdaq composite stock market index, which had risen almost 800% in just a few years, crashed from its peak by October 2002, giving up all its gains as the bubble burst.

More recent examples are the metaverse and non-fungible tokens (NFTs). The metaverse was a vision in which people would flock to a 3D virtual web via their avatars; analysts projected the market would be worth over $1 trillion in a decade. NFTs sold at eye-popping valuations. Both were clearly ahead of their time and were swept away as AI mania took over.

Still early days

For all its niggles, AI is a more fundamental technology shift than the metaverse or NFTs. But if it were having a meaningful impact, more people, at least in developed economies, would be willing to pay for ‘reliable’ premium services. That is not quite the case. OpenAI’s ChatGPT has around 180 million daily active users worldwide, but less than 5% (under 9 million) pay to use it. And across companies, the use of AI varies, with digital startups using it more than traditional companies.

Sam Altman, chief executive officer, OpenAI. (AFP)

“From a tech evolution standpoint, we are at the infrastructure buildout phase,” said Namit Chugh, principal, W Health Ventures, a healthcare-focused venture investor. “The middleware, services layer, applications layer will come on top of that. That’s when companies can start monetizing. The problem is AI infrastructure is very expensive to build.” W Health Ventures has invested in AI-focused startups such as Wysa, an AI assistant for people who need mental health support.

“There is a lot of FOMO—fear of missing out—ensuring that enterprises have an AI strategy. But at 60-65% accuracy AI won’t be good. This has to improve,” said RTP Global’s Garg.

“If you ignore AI you will be out of business. Ventures like Uber, Netflix, Amazon, Airbnb disrupted the market. If they don’t adapt with AI they will be dinosaurs. The problem is, a lot of people do not understand this animal,” said Arnab Basu, partner and leader, advisory, PwC India.

There is a lot of FOMO ensuring that enterprises have an AI strategy. But at 60-65% accuracy, AI won’t be good.
—Nishit Garg

The India reality

“India’s ambition is to…become one of the top three global economies in terms of GDP,” Rajnil Malik, partner and GenAI go-to-market leader, PwC India, said. AI services will play a big role in this. RoI is not evident yet, but building blocks are being put in place. Platforms like Uber were using AI from day 1, but there was no RoI for long, he added.

According to EY, 66% of India’s top 50 unicorns are already using AI. But only 15-20% of proof-of-concept (PoC) AI projects (more like trials) by domestic enterprises have rolled out into production. However, among Global Capability Centres (GCCs), the back offices of global companies in India, the shift from PoC to rollout is around 40%. According to IT body Nasscom, there are around 1,600 GCCs in India and their numbers are growing.

About a third of the use cases in India are for intelligent assistants and chatbots. Another 25% relate to marketing automation enabled by text generation and other capabilities like text-to-images or text-to-videos. Document intelligence is emerging as a key opportunity, with around one-fifth of the use cases focusing on document summarization, enterprise knowledge management and search, according to EY.

Tata Steel has partnered with an AI tech platform to use AI for green steel by reducing emissions. IndiGo has introduced the AI chatbot 6Eskai to assist travellers. Ecommerce major Flipkart’s knowledge assistant Flippi uses GenAI and LLMs to offer customized recommendations. Reliance Industries and Tata Group inked a strategic pact with Nvidia in September last year to develop India-focused AI-powered supercomputers, cloud (for AI use cases) and GenAI applications. The government of India has also made a provision of ₹10,000 crore to procure computing power for AI projects.


Rao of Speciale Invest believes that in India, in sectors such as manufacturing, there may not be a blanket use of AI as it competes with relatively low labour costs. AI will be more cost effective in software development if it takes over some coding tasks, and decreases the need for additional manpower.

“There are productivity improvements,” said Mahesh Makhija, partner and technology consulting leader, EY India. “But with errors, hallucinations (when an AI model generates misleading or incorrect results), and the risk of data theft and security lapses, companies are cautious about using AI.”

But Makhija is bullish on AI’s long-term prospects. “Things will improve. The nature of work will change, just as Excel sheets and PPTs decades ago collapsed business planning times from weeks to days. Further improvements will come with AI,” he said.

The human element

Users often find the experience of interacting with chatbots frustrating and want a human to solve their problems. (istockphoto)

An oft-cited example of AI success is Swedish fintech company Klarna. In 2023, Klarna partnered with OpenAI to develop a virtual assistant. This March, the fintech claimed its virtual agent helped shrink its query resolution time from 11 minutes to just two. The assistant does the work of 700 humans and Klarna expects to save $40 million this year.

Virtual assistants and chatbots are increasingly being used across enterprises to reduce the load (and save costs) on human contact centres and also improve what they can do (though this is mostly restricted to answering FAQs). But users often find the experience frustrating and want a human to solve their problems.

In the US, a Gartner survey of 5,728 customers, conducted in December 2023, underlined that people remain concerned about the use of AI in the customer service function. Of those surveyed, 64% said they would prefer that companies didn’t use AI in customer service. In addition, 53% of the customers surveyed stated that they would consider switching to a competitor if they found a company was going to use AI for customer service. The top concern? It will get more difficult to reach a human agent. Other concerns include AI displacing jobs and AI providing wrong answers.

“Once customers exhaust self-service options, they’re ready to reach out to a person. Many customers fear that GenAI will simply become another obstacle between them and an agent,” Keith McIntosh, senior principal, research, Gartner customer service and support practice, said in a media release earlier this month.

For AI to take off, its proponents will have to address high costs, build killer apps, and generate correct, error-free output for institutions and people. If this disruptive force is to become as ubiquitous as the internet is today, it has to show trustworthy results. Else it runs the risk of a further erosion in value as stakeholders grow impatient.


Google AI narrowly misses Gold in International Mathematics Competition: Report

In a stunning display of mathematical prowess, Google’s AI systems, AlphaProof and AlphaGeometry 2, have achieved silver medal-level performance at the prestigious International Mathematical Olympiad (via India Today). 

AlphaProof, a groundbreaking AI system introduced by Google, excels in formal mathematical reasoning, reported the publication. Utilizing a blend of language models and the AlphaZero reinforcement learning algorithm—renowned for mastering chess and Go—AlphaProof trains itself to tackle complex math problems using Lean, a formal language for mathematics. Demonstrating its capabilities, AlphaProof successfully solved two challenging algebra problems and one number theory problem during the IMO, including the competition’s most difficult problem, a feat achieved by only five human contestants.
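To give a flavour of what a “formal language for mathematics” means here (this snippet is purely illustrative and is not from the Google systems themselves), a trivially simple theorem in Lean 4 looks like this — every step is machine-checked, which is what lets a system like AlphaProof verify its own solutions:

```lean
-- A toy Lean 4 statement and proof: addition of natural numbers
-- is commutative. The proof term appeals to a core library lemma;
-- Lean's kernel verifies that it really proves the stated claim.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

IMO-grade problems require proofs vastly longer and less obvious than this, but the same guarantee applies: if Lean accepts the proof, it is correct.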

Reportedly, the second AI system, AlphaGeometry 2, is a notable advancement over Google’s earlier geometry-solving AI. Using a neuro-symbolic hybrid method, it integrates an advanced language model with a robust symbolic engine.

This enhancement enabled AlphaGeometry 2 to solve intricate geometry problems more efficiently. During the IMO, AlphaGeometry 2 impressively solved Problem 4 in just 19 seconds, which involved complex geometric constructions and a deep understanding of angles, ratios, and distances. Trained on a vast dataset encompassing 25 years of historical IMO geometry problems, AlphaGeometry 2 boasts an impressive 83 per cent success rate in solving these challenges.

Google’s AI systems achieved a score of 28 out of 42 points at the IMO, falling just one point short of a gold medal. Renowned mathematicians, such as Fields Medal recipient Prof Sir Timothy Gowers and Dr. Joseph Myers, Chair of the IMO 2024 Problem Selection Committee, reviewed the AI’s solutions. They concluded that the AI could produce impressive and non-obvious solutions, highlighting a significant milestone in AI’s ability to perform complex mathematical reasoning.

This achievement underscores Google’s progress in advancing AI technology, with the potential to revolutionize various fields by assisting mathematicians in exploring new hypotheses, solving longstanding problems, and automating time-consuming elements of mathematical proofs. 

In the future, Google intends to share additional technical information about AlphaProof and to further investigate various AI methodologies to improve mathematical reasoning, adds the publication. Their goal is to create AI systems that collaborate with human mathematicians, thereby advancing the frontiers of science and technology.


‘India is uniquely positioned to drive the next generation of AI innovation’: Google DeepMind’s Ajjarapu

In an interview on the sidelines of the Google I/O Connect held in Bengaluru on Wednesday, Ajjarapu reasoned that with one of the world’s largest mobile-first populations, micro-payment and digital payment models, a booming startup and developer ecosystem, and a diverse language landscape, “India is uniquely positioned to drive the next generation of AI innovation.”

In India, Google works with the Ministry of Electronics and Information Technology’s Startup Hub to train 10,000 startups in AI, expanding access to its artificial intelligence (AI) models like Gemini and Gemma (a family of open models built on Gemini technology), and introducing new language tools from Google DeepMind India, according to Ajjarapu.

It supports “eligible AI startups” with up to $350,000 in Google Cloud credits “to invest in the cloud infrastructure and computational power essential for AI development and deployment.”

Karya, an AI data startup that empowers low-income communities, is “using Gemini (also Microsoft products) to design a no-code chatbot,” while “Cropin (in which Google is an investor) is using Gemini to power its new real-time generative AI, agri-intelligent platform.”

Manu Chopra, co-founder and CEO of Karya, said he uses Gemini “to take Karya Platform global and enable low-income communities everywhere to build truly ethical and inclusive AI.”

Gemini has helped Cropin “build a more sustainable, food-secure future for the planet,” according to Krishna Kumar, the startup’s co-founder and CEO.

Robotics startup Miko.ai “is using a Google LLM as a part of its quality control mechanisms,” says Ajjarapu.

According to Sneh Vaswani, co-founder and CEO of Miko.ai, Gemini is the “key” to helping it “provide safe, reliable, and culturally appropriate interactions for children worldwide.”

Helping farmers

With an eye on harnessing the power of AI for social good, Google plans to soon launch the Agricultural Landscape Understanding (ALU) Research API, an application programming interface to help farmers leverage AI and remote sensing to map farm fields across India, according to Ajjarapu.

The solution is built on Google Cloud and on partnerships with the Anthro Krishi team and India’s digital AgriStack. It is piloted by Ninjacart, Skymet, Team-Up, IIT Bombay, and the Government of India, he pointed out.

“This is the first such model for India that will show you all field boundaries based on usage patterns, and show you other things like sources of water,” he added.

On local language datasets, Ajjarapu underscored that Project Vaani, in collaboration with the Indian Institute of Science (IISc), has completed Phase 1: over 14,000 hours of speech data across 58 languages from 80,000 speakers in 80 districts. The project plans to expand its coverage to all states of India, totalling 160 districts, in Phase 2.

Google has also introduced IndicGenBench, a benchmarking tool tailored for Indian languages, which covers 29 languages. Additionally, it is open-sourcing its CALM (Composition of Language Models) framework, which lets developers integrate specialised language models with Gemma models. For example, integrating a Kannada specialist model into an English coding assistant may help it offer coding assistance in Kannada as well.

Google, which has Gemini Nano tailored for mobile devices, has introduced the Matformer framework, developed by the Google DeepMind team in India. According to Manish Gupta, director, Google, it allows developers to mix different sizes of Gemini models within a single platform.

This approach optimises performance and resource efficiency, ensuring smoother, faster, and more accurate AI experiences directly on user devices.

India-born Ajjarapu was part of Google’s corporate development team that handled mergers and acquisitions when Google’s parent Alphabet acquired UK-based AI company DeepMind in 2014. As a result, he got the opportunity to conduct the due diligence and lead the integration of DeepMind with Google. 

Research, products and services

Ajjarapu, though, was not a researcher, and was unsure of meaningfully contributing to DeepMind’s mission, which “at that time, was to solve intelligence.” This prompted him to quit Google in 2017 after 11 years, and launch Lyft’s self-driving division. Two years later, Ajjarapu rejoined Google DeepMind as senior director, engineering and product.

Last year, Alphabet merged the Brain team from Google Research and DeepMind into a single unit called Google DeepMind, and made Demis Hassabis its CEO. Jeff Dean, who reports to Sundar Pichai, CEO of Google and Alphabet, serves as chief scientist to both Google Research and Google DeepMind.

While the latter unit focuses on research to power the next generation of products and services, Google Research deals with fundamental advances in computer science across areas such as algorithms and theory, privacy and security, quantum computing, health, climate and sustainability and responsible AI.

Has this merger led to a more product-focused approach at the cost of research, as critics point out? Ajjarapu counters that Google was still training its Gemini foundation models when the units were merged in April 2023, after which it launched the Gemini models in December, followed by Gemini 1.5 Pro, “which has technical breakthroughs like a long context window (2 million tokens, which covers about an hour of video, 11 hours of audio, or 30,000 lines of code).”

A context window is the amount of text, measured in units called tokens (roughly word fragments), that a language model can take as input when generating responses.
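As a rough illustration, a context window simply caps how much input the model can attend to; anything beyond it must be dropped or truncated. The sketch below uses naive whitespace “tokens” rather than a real subword tokenizer, and the function name is made up for illustration:

```python
def fit_to_context(text: str, max_tokens: int) -> list[str]:
    # Naive whitespace "tokenizer" -- real models use subword tokenizers,
    # so actual token counts differ from word counts.
    tokens = text.split()
    # Keep only the most recent tokens that fit inside the window,
    # mimicking how older context is dropped in a long conversation.
    return tokens[-max_tokens:]

prompt = "please summarise the meeting notes from last week"
window = fit_to_context(prompt, max_tokens=5)
print(window)  # only the most recent 5 whitespace tokens survive
```

A 2-million-token window, by contrast, is large enough that hours of transcribed audio or an entire codebase can fit without any such truncation.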

“Today, more than 1.5 million developers globally use Gemini models across our tools. The fastest way to build with Gemini is through Google AI Studio, and India has one of the largest developer bases on Google AI Studio,” he notes.

Google Brain and DeepMind, according to Ajjarapu, were also collaborating “for many years before the merger”.

“We believe we built an AI super unit at Google DeepMind. We now have a foundational research unit, which Manish is a part of. Our team is part of that foundation research unit. We also have a GenAI research unit, focused on pushing generative models regardless of the technique — be it large language models (LLMs) or diffusion models that gradually add noise (disturbances) to data (like an image) and then learn to reverse this process to generate new data,” said Ajjarapu, who is part of the product unit and whose job is to “take the research and put it in Google products.”
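The diffusion idea Ajjarapu describes can be sketched in a few lines: a forward process repeatedly blends the data with Gaussian noise, and the model is trained to reverse that process step by step (the learned reverse step is omitted here). A toy forward step, with a made-up noise-schedule value purely for illustration:

```python
import numpy as np

def forward_diffusion_step(x, beta, rng):
    # One forward diffusion step: blend the signal with Gaussian noise.
    # beta controls how much noise is injected at this step; the
    # sqrt weights keep the overall variance roughly constant.
    noise = rng.standard_normal(x.shape)
    return np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise

rng = np.random.default_rng(0)
x = np.ones(4)            # a stand-in for image pixel values
for t in range(100):      # after many steps, x is nearly pure noise
    x = forward_diffusion_step(x, beta=0.05, rng=rng)
```

Generation then runs the learned reverse process: starting from pure noise, the trained model removes a little noise at each step until a new image emerges.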

Google also has a science team, which is primarily responsible for things like protein folding and discovering new materials. Protein folding refers to the problem of determining the structure of a protein from its sequence of amino acids alone.

“There are many paradigms to go after AI development, and we feel like we’re pretty well covered in all of them,” he says. “We’re now fully in our Gemini era, bringing the power of multimodality to everyone.”

Match, incubate and launch

And how does Google decide which research projects and product ideas to prioritise and invest in? According to Ajjarapu, the company uses an approach called “match, incubate, and launch.”

Is there a problem that’s ready to be solved with a technology that’s readily available? That’s the matching part. For instance, a road map is naturally a graph, so graph neural nets are a match for mapping problems. However, even if there’s a match, performance is not guaranteed when it comes to generative AI.

“You have to iterate it,” he says. 

The next step involves de-risking an existing technology or research breakthrough for the real world since not all of them are ready to be made into products. This phase is called incubation. The final stage is the launch.

“That’s the methodical approach that we follow. But given the changing nature of the world, and changing priorities, we try to be nimble,” says Ajjarapu.

Gupta, on his part, asks his research team to identify research problems that will have “some kind of a transformative impact on the world, which makes it worthy of being pursued, even if the problem is very hard or the chances of failure are very high.”

And how is Google DeepMind addressing ethical concerns around AI, especially biases and privacy? According to Gupta, the company has developed a framework to evaluate the societal impact of technology, created red teaming techniques, data sets and benchmarks, and shared them with the research community.

He adds that his team contributed the SeeGULL dataset (benchmark to detect and mitigate social stereotypes about groups of people in language models) to uncover biases in language models based on aspects such as nationality and religion.

“We work to understand and mitigate these biases and aim for cultural inclusivity too in our models,” says Gupta. 

Ajjarapu adds that the company’s focus is on “responsible governance, responsible research, and responsible impact.” 

He cited the example of Google SynthID, an embedded watermarking and metadata-labelling solution that flags AI-generated photos (deepfakes) created using Google’s text-to-image generator, Imagen.

 
