The 12 startup business plans covered a variety of verticals, including AI safety, AI for health, AI for social good, and more.
The inaugural Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) Entrepreneurship Courses concluded with 12 AI-based startup business plans presented at an on-campus event in Masdar City.
22 students gained the entrepreneurial skills, tools, and networks needed to commercialise their AI solutions in the UAE, and three startups were fast-tracked for financial grants from MBZUAI.
Jointly launched by MBZUAI’s Incubation and Entrepreneurship Centre (MIEC) and startAD, the Abu Dhabi-based startup accelerator powered by Tamkeen and anchored at NYU Abu Dhabi, the programme aims to boost the AI startup ecosystem in Abu Dhabi.
The top three AI-assisted technologies and applications are Audiomatic, which provides automatic and emotionally intelligent audio production for video content, including customised scores, sound effects, and narrations; Limb, an application providing accessible physiotherapy support, including exercise correction and pain management features; and Momzo, a generative AI-powered assistant that supports women from maternity through motherhood.
“These are the first AI-focused entrepreneurship courses at the university and in the UAE and come on the eve of an expected AI startup boom led by generative AI,” MBZUAI’s Vice President of Public Affairs and Alumni Relations Sultan Al Hajji said.
He added, “The entrepreneurship courses actively encourage students to take advantage of the favourable entrepreneurial environment in Abu Dhabi and ignite the potential to transform their research and engineering know-how into a business. The startup pitches highlight industry-specific and application use cases and have the potential to make a significant impact on society with their bold, sophisticated, and innovative concepts.”
Managing Director of startAD Ramesh Jagannathan said, “The MBZUAI IEC programme immersed AI innovators in the exciting world of innovation and entrepreneurship, where they learned to apply business literacy heuristics to their ideas. According to an Accenture Report, AI will add $182 billion in annual gross value to the UAE’s economy by 2035. These startup projects demonstrate high potential and are poised to strengthen the UAE’s knowledge economy.”
The 22 graduates represent more than ten nationalities, with 41% being women. All participants graduated from the intensive five-week entrepreneurship courses, which included eight workshops and three community engagement events covering topics such as idea generation, market discovery, prototyping, and pitching.
The top three pitches were named by a panel of expert judges, including Jean-Luc Scherer, business incubation expert and advisor at Sandooq Al Watan; Mariam Al Badr, director of outreach at Khalifa Fund; Dr. Ramzi Ben Ouaghrem, director of research development and engagement at MBZUAI; Michael Huang, acting director of strategy and IEQA at MBZUAI; and Selim Tira, investment representative at Shorooq Partners.
The IMF believes that higher-income and younger workers might see disproportionate wage increases as a result of AI adoption. Lower-income and older workers might lag…reports Asian Lite News
Artificial intelligence (AI) will affect nearly 40 per cent of all jobs around the world, according to a new analysis by the International Monetary Fund (IMF).
“In most scenarios, AI will likely worsen overall inequality, a troubling trend that policymakers must proactively address to prevent the technology from further stoking social tensions,” IMF Managing Director Kristalina Georgieva said in a blogpost.
According to the IMF, about 60 per cent of jobs may be impacted by AI in advanced economies. Around half the exposed jobs may benefit from AI integration, enhancing productivity.
For the other half, AI applications may perform key tasks currently performed by humans, potentially lowering labour demand, and resulting in lower wages and reduced hiring. In the worst-case scenario, some of these jobs might disappear.
In emerging markets and low-income countries, however, AI exposure is expected to be 40 per cent and 26 per cent, respectively.
“These findings suggest emerging market and developing economies face fewer immediate disruptions from AI. At the same time, many of these countries don’t have the infrastructure or skilled workforces to harness the benefits of AI, raising the risk that over time the technology could worsen inequality among nations,” said the IMF.
AI could also affect income and wealth inequality within countries.
The IMF believes that higher-income and younger workers might see disproportionate wage increases as a result of AI adoption. Lower-income and older workers might lag.
“We may see polarisation within income brackets, with workers who can harness AI seeing an increase in their productivity and wages — and those who cannot falling behind,” noted Georgieva.
She further mentioned that “it is crucial for countries to establish comprehensive social safety nets and offer retraining programmes for vulnerable workers”.
“In doing so, we can make the AI transition more inclusive, protecting livelihoods and curbing inequality,” Georgieva added.
The march to the Age of Intelligence is being accelerated by the advances that Information Technology has made towards applications of Artificial Intelligence in the spheres of innovation, business and security, writes D.C. Pathak
Not too long ago, the world witnessed a great transformation resulting from a combination of epoch-making developments, all occurring around the same time at the beginning of the 1990s. These developments created a ‘new world order’ impacting not only the economy and business but national security and international cooperation as well.
An unprecedented level of ‘globalisation’ was reached in terms of both economic expansion and a universally shared threat to security: the Cold War ended with the dismemberment of the USSR and the demise of International Communism, the Information Technology revolution created borderless markets, and a new faith-based global terror rooted in Islamic ‘radicalisation’, represented by the Taliban, Al Qaeda and ISIS, registered a rising graph.
The upswing of Terrorism can be traced to the turbulent post-Soviet Afghanistan when Pakistan sent in the Taliban to control that country and facilitated the installation of the Kabul Emirate of Taliban in 1996.
Since Islamic radicals considered the US-led West their first enemy, a hostility rooted in historical legacy, the Emirate ran into problems with the US, which then worked for its ouster. This laid the turf for 9/11, which in turn resulted in the US-sponsored ‘war on terror’ in Afghanistan and Iraq.
The ‘war on terror’ was utilised by Islamic radical forces to spread their hold in the Muslim world somewhat at the cost of the allies of the US like Saudi Arabia, UAE and Bahrain.
The overriding impact of the new world order was to give a boost to economic globalisation, with the agenda largely set by the US, even as the strategy of countering the terror of Islamic radicals became equally important for the US.
India and the US had to be together for their mutual economic advancement but they also had to join hands, as the two largest democracies, in leading the democratic world against the peril of the faith-based terrorism that was sustained by the fundamentalist notions of supremacism and exclusivism of Islam as a faith.
The driving force behind economic globalisation, which became the prime characteristic of the post-Cold War world, was the arrival of Information Technology (IT). It enabled instant communication across geographical boundaries and set new norms of entrepreneurship and competition, permitting a ‘smart’ player to take on much larger and more resourceful rivals from any part of the globe.
‘Smartness’ lay in producing more per unit of resource, which IT made possible, and businesses were compelled to study both market trends and the use of technology to stay competitive. Intelligence, by definition, is information that enables you to see what lies ahead, and since this could be gleaned from an analysis of the enormous amount of data regularly put into the public domain, corporates willingly invested in a set-up that would produce Business Intelligence for them.
Intelligence is a word normally used in the context of national security, but its applied version is now part of the business world, and ‘being well informed’, which was the mandate of the Age of Information, has also become a means of running personal and family life successfully. Ignorance can no longer be defended, and an awareness of the socio-economic scene and even the crime situation is a factor in keeping one safe and secure.
Terrorism, narcotics and illicit arms have brought issues of national security closer to citizens because they operate where people live, and that is another reason why citizens should keep themselves broadly informed of the security environment around them.
It is the duty of the State to keep the citizens safe and there is a certain expectation from the people that they would contribute to this mandate, too.
Fundamental Duties defined in the Constitution have acquired a newfound importance in the context of India’s internal security.
It can be said that just as the world transited from the Industrial Age to the Age of Information in the early Nineties, it is now shifting to the Age of Intelligence because for nations, organisations and even individuals, perceptions of ‘what lies ahead’ are becoming even more important in the light of new geopolitical developments, the economic situation in the world and at home and the changing security scenario at the global and national levels.
The Age of Information created the ‘knowledge economy’, gave a new dimension to the process of making a decision and underscored the importance of Intelligence, which by definition is information of special value since it gives a peep into what opportunity or risk lies on the horizon.
Knowledge is analysed information, Intelligence is futuristic information, and decision-making requires information that bridges the gap between ‘guesswork’ and ‘reality’. A global mindset is an essential trait for the successful handling of business today, as it has always been in the sphere of national security, because a rival or adversary could be operating from anywhere across geographical frontiers.
Finally, in the Age of Information, competent analysis of facts garnered from the public domain has acquired newfound importance because the enemy or the rival leaves enough footprints in the social or cyber media even while using the latter covertly. This in fact is an exercise of Intelligence generation as the analysts can possibly read the intention of the opponent for the future.
The march to the Age of Intelligence is being accelerated by the advances that Information Technology has made towards applications of Artificial Intelligence in the spheres of innovation, business and security.
Within the input-output principle that governs all transactions in the digital world, AI has emerged as the enabling instrument for the instant processing of billions of data points to produce findings that would be humanly impossible to reach. What is of concern about AI applications, however, is that apart from data processing, they enable the simulation of voice, photo identity and even personal behaviour, including the choices exercised by an individual, to generate fake versions that could be used for ‘misinformation’, fraud and political purposes such as image bashing and influencing the electoral process.
AI has produced the phenomena of ‘Machine Learning’, ‘Deep Learning’ and ‘Natural Language Processing’, but it has to be remembered that so-called ‘Computer Vision’ is still rooted in ‘pattern’ reading and the use of ‘key’ words. ‘Intelligence’ produced through this route is confined to a limited ‘predictability’ of human conduct based on the analysis of personal data.
The versatility of thought that the human mind can command while examining a situation, the ‘imagination’ it can invoke in seeing what lies beyond the data in front of it, and the quality of human ‘empathy’ it can use in decision-making are what distinguish Human Intelligence from Artificial Intelligence. This is not to underplay the epoch-making promise of human good that AI, as an ultimate advancement of IT, has offered.
The fact is that AI is a further milestone in the world’s progress from the ‘Age of Information’ to the ‘Age of Intelligence’. There is little doubt that the legitimate growth of AI is putting health care, education, innovation, productivity and Human Resource development on an entirely new pedestal and helping the larger good of the world.
There have been some concerns about possible job losses, particularly in the white-collar segment, but what is on the anvil is that businesses are going to get more efficient, diversified and stable through AI applications without necessarily reducing their manpower.
The call for global AI regulations is already emerging as a major requirement, and the matter has figured prominently at the G20 and other international platforms such as the APEC Summit because of fears about the misuse of weapon automation and the danger of malcontents and terrorists using the technology to plan and execute operations, including cyber attacks.
The use of AI by Israel to identify and locate Hamas targets in Gaza is an illustration of its application in defence. India is rightly at the forefront of efforts to put AI applications to the larger good of humanity while preventing their destructive fallout. It has just hosted an international conference in Delhi to deliberate on various aspects of AI.
(The writer is a former Director of the Intelligence Bureau, India’s domestic intelligence agency. Views are personal)
The EU’s AI Act is set to be the world’s first comprehensive set of rules to govern AI and user harm associated with it.
The European Parliament on Saturday said its members have reached a landmark “provisional agreement” on the proposed Artificial Intelligence Act (AI Act).
“This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field,” the European Parliament said in a statement.
The rules establish obligations for AI based on its potential risks and level of impact. European Commission President Ursula von der Leyen said that the political agreement is a “global first”.
“The AI Act is a global first. A unique legal framework for the development of AI you can trust. And for the safety and fundamental rights of people and businesses. A commitment we took in our political guidelines – and we delivered,” she posted on X.
Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit biometric categorisation systems that use sensitive characteristics (political, religious, philosophical beliefs, sexual orientation, race).
The agreement also prohibits the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, emotion recognition in the workplace and educational institutions, and social scoring based on social behaviour or personal characteristics.
It also curbs AI systems that manipulate human behaviour to circumvent people’s free will, as well as AI used to exploit the vulnerabilities of people (due to their age, disability, or social or economic situation). For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law), clear obligations were agreed. Members successfully managed to include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors.
“AI systems used to influence the outcome of elections and voter behaviour, are also classified as high-risk. Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights,” said the Parliament.
Nadella said that the pace of innovation that they have driven has been remarkable, especially during a time of so much “continued hardship and uncertainty in the world”….reports Asian Lite News
As Sam Altman returned to OpenAI after five days of intense drama, Microsoft Chairman and CEO Satya Nadella said that technology, including AI, is only a tool.
In an internal memo to employees ahead of Thanksgiving holiday, Nadella said that the pace of innovation that they have driven has been remarkable, especially during a time of so much “continued hardship and uncertainty in the world”.
“But technology, including AI, is only a tool. It’s a means, not an end. And, ultimately, our end is our mission to empower people and organisations all over the planet — one individual, one community, one country at a time,” he told employees.
“At the end of the day, the greatest privilege of my job is working with people who are driven by mission. There is no better example of this than these past 5 days, when I saw people across the company remaining focused on our mission and serving our customers and partners, stepping up to help in every way possible,” Nadella added.
Microsoft Chief Technology Officer (CTO) and EVP of AI, Kevin Scott, also addressed employees about the OpenAI turmoil, reports The Verge.
“The events of the past few days have been uncertain for our colleagues at OpenAI, and of intense interest to many others. Throughout, nothing has changed or wavered about our resolve and focus to deliver the world’s best AI technology platforms and products to our customers and partners,” Scott said in a separate memo to employees.
“We will continue to support our colleagues at OpenAI and the phenomenal work they’ve been doing alongside us in service of that mission. As we have for these past 4+ years, we look forward to continuing our work with Sam and his team,” he added.
Scott said that despite the potential of the past few days to distract, both Microsoft and OpenAI scientists and engineers have been working with undiminished urgency.
On Wednesday, OpenAI announced that Altman and president and co-founder Greg Brockman are returning to the company with a new board in place.
Altman’s Exit Sparks Intrigue at OpenAI
A secret AI project named ‘Q*’ (pronounced Q-Star) at OpenAI that could threaten humanity may have been the reason behind Sam Altman’s ouster as CEO from the ChatGPT-developing company.
According to reports, several staff researchers sent the OpenAI board a letter warning that a powerful AI breakthrough could threaten humanity.
The letter and the AI algorithm were a catalyst that caused the board to oust Altman, according to a Reuters report, citing sources.
The previously unknown letter was one of the factors “among a longer list of grievances by the board that led to Altman’s firing”.
The researchers who wrote the letter did not comment, and neither did OpenAI.
The ChatGPT maker had made progress on the ‘Q*’ project, which could be a breakthrough in the search for superintelligence, also known as artificial general intelligence (AGI).
According to reports, OpenAI’s senior executive Mira Murati told employees the letter “precipitated the board’s actions” to fire Altman last week.
However, an OpenAI spokesperson said that “Murati told employees what the media reports were about but she did not comment on the accuracy of the information”.
A person familiar with the matter told The Verge that the board never received a letter about such a breakthrough. Sam Altman on Wednesday said he is returning to the ChatGPT developing company with a new board and Microsoft CEO Satya Nadella’s support.
OpenAI president and co-founder Greg Brockman also shared a picture with his team on X after he and Altman returned to OpenAI.
Fraud using artificial intelligence is uncommon, but examples of “successful” cases are already known….reports Asian Lite News
The Beatles have once again delighted millions of fans around the world by releasing a new song, made possible by artificial intelligence (AI), which combined parts of an old recording while also improving its audio quality. While there is joy at the band’s masterpiece, there is also a darker side to using AI: the creation of deepfake voices and images.
Thankfully, such deepfakes, and the tools used to make them, are for now not well developed or widespread. Nevertheless, their potential for use in fraud schemes is extremely high, and the technology is not standing still.
What are voice deepfakes capable of?
OpenAI recently demonstrated its Audio API model, which can generate human-like speech from input text. So far, this OpenAI software comes closest to real human speech.
In the future, such models could also become a new tool in the hands of attackers. The Audio API can reproduce a specified text by voice, and users can choose which of the suggested voice options the text will be pronounced with. The OpenAI model, in its existing form, cannot be used to create deepfake voices, but it is indicative of the rapid development of voice generation technologies.
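To illustrate the kind of text-to-speech call described above, here is a minimal sketch assuming the official openai Python SDK; the model and voice names (“tts-1”, “alloy”) are the publicly documented presets and are used purely as an example, not as part of the report.

```python
# Minimal sketch of a text-to-speech request, assuming the official
# `openai` Python SDK and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Ask the Audio API to read a short text aloud in one of the preset voices.
response = client.audio.speech.create(
    model="tts-1",   # OpenAI's text-to-speech model
    voice="alloy",   # one of the selectable preset voices
    input="Hello, this is a short demonstration of synthetic speech.",
)

# Save the returned audio to an MP3 file.
response.stream_to_file("speech.mp3")
```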
Today, there are practically no tools capable of producing a high-quality deepfake voice that is indistinguishable from real human speech. In the last few months, however, more tools for generating a human voice have been released. Previously, users needed basic programming skills, but it is becoming ever easier to work with these tools. In the near future, we can expect to see models that combine simplicity of use with quality of results.
Fraud using artificial intelligence is uncommon, but examples of “successful” cases are already known. In mid-October 2023, American venture capitalist Tim Draper warned his Twitter followers that scammers could use his voice in fraud schemes. Draper shared that the requests for money being made in his voice are the result of artificial intelligence, which is obviously getting smarter.
How to protect yourself?
So far, society may not perceive voice deepfakes as a possible cyber threat. There are very few cases where they are used with malicious intentions, so protection technologies are slow to appear.
For now, the best way to protect yourself is to listen carefully to what your caller says on the telephone. If the recording is of poor quality, contains noise, and the voice sounds robotic, that is reason enough not to trust the information you hear.
Another good way to test your companion’s “humanity” is to ask out-of-the-box questions. For example, if the caller turns out to be a voice model, a question about its favourite colour will leave it stumped, as this is not something a victim of fraud usually asks. Even if the attacker manually dials and plays back an answer at this point, the time delay in the response will make it clear that you are being tricked.
One more safe option is to install a reliable and comprehensive security solution. While such solutions cannot detect deepfake voices with 100 per cent accuracy, they can help users avoid suspicious websites, payments and malware downloads by protecting browsers and checking all files on the computer.
“The main advice at the moment is not to exaggerate the threat or try to recognise voice deepfakes where they don’t exist. For now, the available technology is unlikely to be powerful enough to create a voice a human would not be able to recognise as artificial. Nevertheless, you need to be aware of possible threats and be prepared for advanced deepfake fraud becoming a new reality in the near future,” comments Dmitry Anikin, Senior Data Scientist at Kaspersky.
Declaring the AI Safety Summit a “historic achievement”, Sunak says the discussions will help tip the balance in favour of humanity and secure AI’s benefits…reports Asian Lite News
Prime Minister Rishi Sunak on Thursday concluded the two-day AI Safety Summit with what he characterised as a “landmark” agreement with governments and artificial intelligence companies to work together on testing new AI models before they are released into the public domain.
Addressing a press conference at Bletchley Park in Buckinghamshire to summarise the achievements of the first summit of its kind hosted by the UK, Sunak said the agreement was reached along with US Vice-President Kamala Harris who has committed to setting up an American AI Safety Institute to work alongside its UK counterpart.
It builds on the Bletchley Declaration, agreed on day one of the two-day summit on Wednesday by 28 countries, including India, on their shared responsibility to address the risks associated with AI.
“Like-minded governments and AI companies have today reached a landmark agreement. We will work together on testing the safety of new AI models before they are released,” Sunak told reporters.
“This partnership is based around a series of principles which set out the responsibilities we share and it’s made possible by the decision I have taken, along with Vice-President Kamala Harris, for the British and American governments to establish world-leading AI Safety Institutes,” he said.
Declaring the AI Safety Summit a “historic achievement” by the UK to take the lead on this generation’s most “transformative” change, Sunak said the discussions will help tip the balance in favour of humanity and secure AI’s benefits for the long-term.
He also revealed that South Korea and France have offered to host the two following summits in future. As part of the outcomes, it was announced that Canadian computer scientist Yoshua Bengio, dubbed the “godfather of AI”, will chair the production of an inaugural report into the technology.
“Yesterday we agreed and published the first-ever international statement about the nature of all of those risks. It was signed by every single nation represented by this summit, covering all continents across the globe and including the United States and China. Some said we shouldn’t even invite China, others said that we could never get an agreement with them. Both were wrong,” said Sunak, with reference to the Bletchley Declaration.
It has also been confirmed that the Frontier AI Taskforce set up earlier by the UK government will now evolve to become the AI Safety Institute, with Ian Hogarth continuing as its Chair. The External Advisory Board for the taskforce, made up of industry heavyweights from national security to computer science, will continue as advisors of the new global hub.
‘AI is the most destructive force in history’
Deadly robots that can climb trees, AI friends and a work-less future were among the topics as Rishi Sunak sat down with Elon Musk.
The prime minister held a highly unusual “in conversation” event with the billionaire X and Tesla owner at the end of this week’s summit on artificial intelligence. Throughout the wide-ranging and chummy discussion, Musk held court as the prime minister asked most of the questions.
The pair talked about how London was a leading hub for the AI industry and how the technology could transform learning. But the chat took some darker turns too, with Sunak recognising the “anxiety” people have about jobs being replaced, and the pair agreeing on the need for a “referee” to keep an eye on the super-computers of the future.
Tech investor and inventor Musk has put money into AI firms and has employed the technology in his driverless Tesla cars – but he’s also on the record about his fears it could threaten society and human existence itself.
“There is a safety concern, especially with humanoid robots – at least a car can’t chase you into a building or up a tree,” he told the audience. Sunak – who is keen to see investment in the UK’s growing tech industry – replied: “You’re not selling this.”
It’s not every day you see the prime minister of a country interviewing a businessman like this, but Sunak seemed happy to play host to his famous guest.
And if he seemed like he was enjoying it, it should be no surprise – he previously lived in California, home to Silicon Valley, and his love of all things tech is well-documented. In a hall that size, Musk was difficult to hear and mumbled through his elaborate musings about the future, but refrained from any off-the-cuff remarks that might have caused Downing Street embarrassment.
The pair discussed the potential benefits of AI, with Musk saying: “One of my sons has trouble making friends and an AI friend would be great for him.” There was also agreement on the possibilities AI presents for young people’s learning, with Musk saying it could be “the best and most patient tutor”.
But there was a stark warning on the potentially ruinous impact it could have on traditional jobs.
“We are seeing the most destructive force in history here,” Musk said, before speculating: “There will come a point where no job is needed – you can have a job if you want one for personal satisfaction but AI will do everything. It’s both good and bad – one of the challenges in the future will be how do we find meaning in life.”
Musk was one of the star guests at this week’s summit – but it briefly looked like the event with Sunak might be a little overshadowed.
Key leaders, including US Vice President Kamala Harris, European Commission President Ursula von der Leyen, and UN Secretary-General Antonio Guterres, are expected to attend the two-day AI Safety Summit in the UK….reports Asian Lite News
Over 100 world leaders, tech honchos, academics and researchers will gather next week in the UK to deliberate upon the risks associated with artificial intelligence (AI) and how to tackle those under the leadership of Prime Minister Rishi Sunak.
In the US, President Joe Biden is reportedly set to issue an executive order next week that deploys numerous federal agencies to monitor the risks of AI and to develop new uses for the technology.
The two-day AI Safety Summit in the UK is likely to see US Vice President Kamala Harris, European Commission President Ursula von der Leyen, UN Secretary-General Antonio Guterres and many other leaders.
According to the BBC, their purpose is to take part in discussions about “how best to maximise the benefits of this powerful technology while minimising the risks.”
In a recent report, the UK government listed some worrisome potential threats of AI, including bio-terrorism, cyber attacks and deepfake images of child sexual abuse. Apparently, Sunak has a plan, and it’s an ambitious one to tackle AI risks.
“He wants to position the UK as the global leader for AI safety,” the report mentioned.
Meanwhile, Biden via an executive order is expected to “pave the way for the use of more AI in nearly every facet of life touched by the federal government, from health care to education, trade to housing, and more”, reports Politico.
“Biden’s order specifically directs the Federal Trade Commission, for instance, to focus on anti-competitive behaviour and consumer harms in the AI industry – a mission that Chair Lina Khan has already publicly embraced,” the report mentioned.
The US Congress has “scrambled to put legislation together to tackle the risks and potential of AI”. However, the US Senate Majority Leader Chuck Schumer cautioned last week that “no broad AI bill was likely to be introduced until next year”.
The Indian government is also likely to organise the first-ever ‘Global India AI Summit’ on December 10, inviting global and domestic leaders in AI to deliberate on deploying AI, as it has other digital technologies, to transform the lives of its citizens.
The conference is poised to cover topics like next-generation learning and foundational AI models, AI’s applications in healthcare, governance, and next-gen electric vehicles, future AI research trends, AI computing systems, investment opportunities, and nurturing AI talent.
Microsoft said it will soon add access to OpenAI’s DALL-E 3 image generator for users to create images right in a chat…reports Asian Lite News
Microsoft Chairman and CEO Satya Nadella on Thursday doubled down on the role of generative AI in the company’s product portfolio, starting with Windows 11, which will include the new AI-powered Copilot feature, and also launched a powerful new Surface laptop lineup.
With over 150 new features, the new Windows 11 update will be available starting September 26, bringing the power of Copilot and new AI-powered experiences to apps like Paint, Photos, Clipchamp and more, right on your Windows PC.
“Microsoft Copilot will uniquely incorporate the context and intelligence of the web, your work data and what you are doing in the moment on your PC to provide better assistance — with your privacy and security at the forefront,” said Yusuf Mehdi, Corporate Vice President & Consumer Chief Marketing Officer.
“It will be a simple and seamless experience, available in Windows 11, Microsoft 365, and in our web browser with Edge and Bing. It will work as an app or reveal itself when you need it with a right click,” he told the gathering.
Microsoft 365 Copilot will generally be available to commercial customers starting November 1 with a more powerful version of M365 Chat, new capabilities for Copilot in Outlook, Excel, Loop, OneNote, OneDrive and Word.
The company also announced new features in Bing and Edge.
Bing Chat Enterprise will also get a few upgrades including support for multimodal visual search and Image Creator now available in the Microsoft Edge mobile app, the company announced.
Microsoft said it will soon add access to OpenAI’s DALL-E 3 image generator for users to create images right in a chat.
“We are entering a new era of AI, one that is fundamentally changing how we relate to and benefit from technology. With the convergence of chat interfaces and large language models you can now ask for what you want in natural language and the technology is smart enough to answer, create it or take action,” said Mehdi.
Additionally, the company introduced new Surface laptops. The Surface Laptop Studio 2, with a 14.4-inch display, starts at $1,999 and runs on Intel’s 13th-generation chips.
Turbocharged with the latest Intel Core processors and cutting-edge NVIDIA Studio tools for creators, and with up to 2x more graphics performance than the MacBook Pro M2 Max, the device brings together the versatility to create and the power to perform, said Microsoft.
The Studio 2 also offers some big new connectivity options: it has two USB-C ports, one USB-A port, a microSD card reader, and the Surface Slim Pen 2.
Microsoft also announced Surface Laptop Go 3 that comes with a 12.4-inch touchscreen and offers up to 15 hours of battery life, along with one USB-A port, one USB-C port, and a headphone jack.
The Surface Laptop Go 3 starts at $799, with availability starting October 3.
Britain holds the rotating presidency of the UN Security Council this month…reports Asian Lite News
The United Nations Security Council is all set to hold its first formal discussion on Artificial Intelligence (AI) in New York today.
Britain holds the rotating presidency of the UN Security Council this month and wants to use it to encourage a multilateral approach to managing both the immense opportunities and the risks that artificial intelligence presents, including its implications for international peace and security. The AI meeting will be chaired by James Cleverly, Secretary of State for Foreign, Commonwealth and Development Affairs, according to the UN.
Like many other Member States, the United Kingdom recognizes that “humanity stands on the precipice of this gigantic technological leap forward”.
In the meeting, the UK will call for international dialogue on AI’s risks and opportunities for international peace and security, ahead of the UK hosting the first-ever global summit on AI later this year, according to a press statement released by the UK government.
Ranking third globally across several metrics, the UK is a world leader in AI and well-placed to convene these discussions. It also stands to gain from growth in the AI sector, which already contributes an estimated 3.7 billion pounds in gross value added (GVA) to the UK economy and employs over 50,000 people.
Meanwhile, on Monday, Cleverly began his visit to the UN in New York, coinciding with the UK’s presidency of the UNSC for the month of July.
“Cleverly will lead a UN Security Council session on the war in Ukraine, prior to which he is expected to announce further UK action to hold the Russian government to account for its calculated deportation of Ukrainian children. Over 19,000 children have been forcibly relocated to re-education camps in an attempt to erase their cultural and national identity,” the press statement read.
“He will also attend the UN High-Level Political Forum to deliver the UK national statement on sustainable development with Member States, civil society organisations and private sector firms, showing the UK’s leadership in bringing the international community together to promote future global security, stability, and prosperity, which in turn will benefit the UK economy – supporting the Prime Minister’s priority to grow the economy,” it added. (ANI)