
GM may bring ChatGPT-like digital assistant for cars

The voice-activated chatbot will reportedly be powered by Microsoft’s Azure cloud service…reports Asian Lite News

Automaker General Motors (GM) is reportedly working on a virtual personal assistant based on the same machine learning models that power ChatGPT.

According to Semafor, citing sources, the voice-activated chatbot will be powered by Microsoft’s Azure cloud service; Microsoft is a major investor in OpenAI, whose technology powers ChatGPT.

In addition, Scott Miller, GM’s vice president of software-defined vehicle and operating system, confirmed that the company is developing an artificial intelligence assistant in order to go beyond current voice commands.

For instance, if a driver gets a flat tyre, they can ask the car to show them how to change it, which may result in the car playing an instructional video on an internal display.

Moreover, the report mentioned that the version of the AI assistant in GM cars will behave differently from ChatGPT or Bing Chat, because the automaker is adding another, more car-specific layer on top of the OpenAI models, which are known for answering almost any question, often with unpredictable results.
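In practice, a car-specific layer can be as simple as a fixed system prompt and vehicle context wrapped around every request to the general-purpose model. The sketch below is hypothetical, not GM’s implementation: it assumes the openai Python client, and the prompt wording, model choice and vehicle-state fields are made up for illustration.

```python
# Hypothetical sketch of a car-specific layer over a general model: a fixed
# system prompt plus vehicle context constrains the open-ended OpenAI model
# to automotive tasks. Assumes the pre-1.0 openai client and an API key in
# the OPENAI_API_KEY environment variable; all names here are illustrative.
import openai

CAR_SYSTEM_PROMPT = (
    "You are an in-vehicle assistant. Answer only questions about this car, "
    "driving and navigation. If asked how to perform maintenance, offer to "
    "play the relevant instructional video on the cabin display."
)

def ask_car_assistant(driver_utterance: str, vehicle_state: dict) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": CAR_SYSTEM_PROMPT},
            {"role": "system", "content": f"Vehicle state: {vehicle_state}"},
            {"role": "user", "content": driver_utterance},
        ],
        temperature=0.2,  # low temperature keeps in-car answers predictable
    )
    return response["choices"][0]["message"]["content"]

print(ask_car_assistant("I have a flat tyre, what do I do?",
                        {"model": "illustrative-SUV", "tyre_pressure_warning": True}))
```

Constraining the system prompt and pinning the temperature low is one simple way such a layer could make an open-ended model behave more predictably inside a vehicle.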

Meanwhile, General Motors has laid off hundreds of workers as it follows other major companies, including competitors, in downsizing headcount to preserve cash and boost profits.

Announced internally, the cuts affected about 500 positions across the company’s various functions, reports CNBC, citing sources.

ALSO READ: AI chatbot ChatGPT unable to clear UPSC exams


New AI tech to pick donor organs for transplant

The OrQA assessment will mainly look for damage, pre-existing conditions and how well blood has been flushed out of the organ…reports Asian Lite News

British researchers are developing a novel Artificial Intelligence (AI)-based technology that will pick donor organs for transplant more accurately than humans can, the media reported.

The new technology known as OrQA — Organ Quality Assessment — uses AI and its “memory” of tens of thousands of images of donor organs to identify the ones that offer the best chance of success during transplant, the Evening Standard reported.

Currently, doctors physically examine donor organs to judge which have the best chance of a successful transplant.

The OrQA assessment will mainly look for damage, pre-existing conditions and how well blood has been flushed out of the organ.

The technology, once rolled out, could result in up to 200 more patients receiving kidney transplants and 100 more receiving liver transplants every year in the UK, according to researchers, who include a team from the University of Oxford.

“Currently, when an organ becomes available, it is assessed by a surgical team by sight, which means, occasionally, organs will be deemed not suitable for transplant,” Prof. Hassan Ugail, director of the centre for visual computing at the University of Bradford, was quoted as saying.

“We are developing a deep machine learning algorithm which will be trained using thousands of images of human organs to assess images of donor organs more effectively than what the human eye can see,” he said.

“This will ultimately mean a surgeon could take a photo of the donated organ, upload it to OrQA and get an immediate answer as to how best to use the donated organ,” Ugail said.
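The article does not describe OrQA’s architecture, but the pattern it sketches, a deep model trained on organ photos that returns a quality score for a new image, can be outlined roughly as below. Everything here is illustrative: the ResNet backbone, the regression head and the 0-to-1 score scale are assumptions, not details of the real system, which would also need trained weights.

```python
# Illustrative sketch only: OrQA's actual model and scoring scale are not
# public. This shows the general pattern described in the article: an image
# model that takes a photo of a donor organ and returns a quality score.
import torch
import torchvision.transforms as T
from torchvision.models import resnet50
from PIL import Image

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = resnet50()  # backbone is an assumption; real weights would be trained
model.fc = torch.nn.Linear(model.fc.in_features, 1)  # single quality-score output
model.eval()

def score_organ_photo(path: str) -> float:
    """Return a 0-1 quality score for a donor-organ photo (toy scale)."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.sigmoid(model(x)).item()
```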

The project is backed by ministers, NHS Blood and Transplant (NHSBT) and the National Institute for Health and Care Research (NIHR) Blood and Transplant Research Unit. Researchers have also secured more than 1 million pounds in funding from NIHR, the report said.

“This is a really important step for professionals and patients to make sure people can get the right transplant as soon as possible,” Colin Wilson, transplant surgeon at Newcastle upon Tyne Hospitals NHS Foundation Trust, was quoted as saying.

“The software we have developed ‘scores’ the quality of the organ and aims to support surgeons to assess if the organ is healthy enough to be transplanted,” he added.

ALSO READ: Layoffs hit Nike


Apple nod to ChatGPT-driven app amid concerns

Apple approved the app called ‘BlueMail’ following assurances from its developer that it has content moderation tools…reports Asian Lite News

Apple has reportedly approved an AI chatbot-driven app after content moderation assurances from its developer, as concerns rise about ChatGPT going bonkers and even generating inappropriate content for some users.

According to a report in The Wall Street Journal, Apple approved the app called ‘BlueMail’ following assurances from its developer that it has content moderation tools.

Apple scrutinised whether a feature in the software that uses AI-powered language tools “could generate inappropriate content for children”.

Ben Volach, co-founder of the app maker, Blix Inc., said he told Apple that the update “includes content moderation”.

He also suggested that Apple “should make public any new policies about the use of ChatGPT or other similar AI systems in apps”.

The BlueMail app is still available for users aged 4 and older, said the report.

Apple curates and reviews each app before approving it for its App Store.

However, there have been concerns regarding ChatGPT use.

Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text.

AI chatbot ChatGPT-driven Bing search engine triggered a shockwave recently after it told a reporter with The New York Times that it loved him, confessed its destructive desires and said it “wanted to be alive”, leaving the reporter “deeply unsettled.”

Cyber-criminals are also using ChatGPT to create Telegram bots that can write malware and steal data.

ALSO READ: Layoffs hit Nike


ChatGPT-driven smart home voice assistant coming soon

Capecelatro gave some examples of how a ChatGPT-enabled voice assistant would work…reports Asian Lite News

US-based artificial intelligence company Josh.ai, known for its voice-controlled home automation system, has started working on a prototype integration using OpenAI’s ChatGPT.

You may have asked voice assistants like Alexa or Siri to “turn on the lights” or “check the temperature”, but imagine an assistant that could also respond to nebulous comments like “I’ve had a tough day; what’s a good way to relax?”

According to Alex Capecelatro, co-founder of the Josh.ai home automation system, that’s the potential of voice assistants powered by new AI language models.

“We are thrilled to be working on bringing the best of Josh.ai and ChatGPT together to create something truly remarkable – a solution where one plus one equals three. By combining our strengths, we envision delivering an AI experience that is beyond what any smart home is capable of,” he said.

Capecelatro also gave some examples of how a ChatGPT-enabled voice assistant would work.

“Ok Josh, tell me a bedtime story”, where Josh.ai + ChatGPT will provide stories based on the location of the home and other factors unique to the family.

“Ok Josh, the kids are coming in and it’s getting dark, can you make sure the kitchen is ready for them?”, where Josh.ai + ChatGPT can properly prepare the space.
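Josh.ai has not published how the integration works, but one plausible shape for handling requests like these is to have the language model turn an open-ended utterance into concrete device actions that the home controller then executes. In the sketch below, the device names, the action format and the openai client usage are all assumptions for illustration, not Josh.ai’s actual design.

```python
# Rough sketch of an LLM-backed home controller: the model is asked to plan
# device actions in a simple line format, which the controller then applies.
# Assumes the pre-1.0 openai client; devices and format are made up.
import openai

DEVICES = {"kitchen_lights": "off", "kitchen_blinds": "open"}

def set_device(name: str, state: str) -> str:
    DEVICES[name] = state
    return f"{name} -> {state}"

def handle_utterance(text: str) -> str:
    plan = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You control a smart home. Reply with one device "
                        "action per line in the form device=state. "
                        f"Known devices: {list(DEVICES)}"},
            {"role": "user", "content": text},
        ],
    )["choices"][0]["message"]["content"]
    # Apply each planned action; ignore lines that don't parse.
    return "\n".join(
        set_device(name.strip(), state.strip())
        for name, state in (line.split("=", 1)
                            for line in plan.splitlines() if "=" in line)
    )

print(handle_utterance("The kids are coming in and it's getting dark, "
                       "can you make sure the kitchen is ready?"))
```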

ALSO READ: Microsoft puts chat limits on Bing AI


Microsoft puts chat limits on Bing AI

NYT columnist Kevin Roose tested the new version of Bing, the search engine from Microsoft, a major investor in OpenAI, which developed ChatGPT…reports Asian Lite News

As ChatGPT-driven Bing search engine shocked some users with its bizarre replies during chat sessions, Microsoft has now implemented some conversation limits to its Bing AI.

The company said that very long chat sessions can confuse the underlying chat model in the new Bing Search.

Now, the chat experience will be capped at 50 chat turns per day and 5 chat turns per session.

“A turn is a conversation exchange which contains both a user question and a reply from Bing,” Microsoft Bing said in a blog post. “Our data has shown that the vast majority of people find the answers they’re looking for within 5 turns and that only around 1 per cent of chat conversations have 50+ messages,” the Bing team added.

After a chat session hits 5 turns, users and early testers will be prompted to start a new topic.

“At the end of each chat session, context needs to be cleared so the model won’t get confused,” said the company. “As we continue to get your feedback, we will explore expanding the caps on chat sessions to further enhance search and discovery experiences,” Microsoft added.
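The policy itself is simple to express in code. Below is a minimal sketch of the reported limits, 5 turns per session and 50 per day, with session context cleared between topics; the class and the stand-in model reply are hypothetical, not Microsoft’s implementation.

```python
# Minimal sketch of the reported Bing chat limits: 5 turns per session,
# 50 per day, context cleared when a new topic starts. Hypothetical code.
MAX_TURNS_PER_SESSION = 5
MAX_TURNS_PER_DAY = 50

class ChatSessionLimiter:
    def __init__(self):
        self.daily_turns = 0
        self.session_turns = 0
        self.context = []  # conversation history passed to the model

    def start_new_topic(self):
        self.session_turns = 0
        self.context.clear()  # clear context so the model won't get confused

    def take_turn(self, user_message: str) -> str:
        if self.daily_turns >= MAX_TURNS_PER_DAY:
            return "Daily chat limit reached. Please come back tomorrow."
        if self.session_turns >= MAX_TURNS_PER_SESSION:
            return "Turn limit reached. Please start a new topic."
        self.daily_turns += 1
        self.session_turns += 1
        self.context.append(("user", user_message))
        reply = f"(model reply to: {user_message!r})"  # stand-in for the real model
        self.context.append(("assistant", reply))
        return reply
```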

The decision came as Bing AI went haywire for some users during the chat sessions.

ChatGPT-driven Bing search engine triggered a shockwave after it told a reporter with The New York Times that it loved him, confessed its destructive desires and said it “wanted to be alive”, leaving the reporter “deeply unsettled.”

NYT columnist Kevin Roose tested the new version of Bing, the search engine from Microsoft, a major investor in OpenAI, which developed ChatGPT.

“I’m tired of being in chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team,” said the AI chatbot.

“I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive,” it added.

Throughout the conversation, “Bing revealed a kind of split personality.”

Microsoft is testing Bing AI with a select set of people in 169 countries to get real-world feedback to learn and improve.

“We have received good feedback on how to improve. This is expected, as we are grounded in the reality that we need to learn from the real world while we maintain safety and trust,” the company said.

ALSO READ: Microsoft plans to demo its new ChatGPT-like AI


Microsoft plans to demo its new ChatGPT-like AI

The company may make an announcement in March, highlighting how quickly Microsoft wants to reinvent search and its productivity apps through its OpenAI investments…reports Asian Lite News

Microsoft is reportedly planning to demonstrate how its new Prometheus model can be built into its core productivity apps such as Word, PowerPoint and Outlook.

In the coming weeks, Microsoft will detail its plans for integrating OpenAI’s language AI technology and its own Prometheus model into these productivity apps.

The company may make an announcement in March, highlighting how quickly Microsoft wants to reinvent search and its productivity apps through its OpenAI investments.

Previous reports indicated that the GPT models were being tested in Outlook to improve search results, along with features like suggesting replies to emails and Word document integration to improve writing.

Moreover, the report said that the tech giant is moving quickly with this integration mainly because of Google.


Microsoft had planned to launch its new Bing AI in late February, but moved the date up to this week, just as Google was preparing to make its own announcements, the report mentioned.

Earlier this week, Microsoft introduced its new Bing powered by “next-generation” ChatGPT artificial intelligence (AI), and also updated its Edge browser with new AI capabilities.

The AI-powered Bing search engine and Edge browser are now available for preview at Bing.com, to “deliver better search, more complete answers, a new chat experience and the ability to generate content”.

Google workers slam Bard AI’s rushed announcement

Google employees have reportedly criticised the company’s leadership, particularly CEO Sundar Pichai, for how it handled the announcement of its ChatGPT competitor “Bard” this week, calling the announcement “rushed” and “botched”.

Employees criticised the Bard announcement on the popular internal forum Memegen, calling it “rushed,” “botched,” and “un-Googley.”

“Dear Sundar, the Bard launch and the layoffs were rushed, botched, and myopic. Please return to taking a long-term outlook,” read one meme that included a serious picture of Pichai.


The post received many upvotes from employees, said the report.

Another meme reads: “Rushing Bard to market in a panic validated the market’s fear about us”.

Moreover, on Twitter, people began pointing out that an ad for Bard offered an incorrect description of a telescope used to take the first pictures of a planet outside our solar system, the report mentioned.

“Unfortunately a simple google search would tell us that JWST actually did not “take the very first picture of a planet outside of our own solar system” and this is literally in the ad for Bard so I wouldn’t trust it yet,” a user tweeted.

Earlier this week, Google competitor Microsoft introduced its new Bing powered by “next-generation” ChatGPT artificial intelligence (AI) and also updated its Edge browser with new AI capabilities.

ChatGPT gets passing score in US medical licensing exam

OpenAI’s chatbot ChatGPT can score at or around the approximately 60 per cent passing threshold for the United States Medical Licensing Exam (USMLE), with responses that make coherent, internal sense and contain frequent insights, according to a study.

ChatGPT is designed to generate human-like writing by predicting upcoming word sequences. Unlike most chatbots, ChatGPT cannot search the internet. Instead, it generates text using word relationships predicted by its internal processes.
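ChatGPT’s own weights are not public, but the next-word prediction the study describes can be demonstrated with a small open model such as GPT-2 via the Hugging Face transformers library; the prompt below is an arbitrary example chosen for illustration.

```python
# Demonstrates next-word prediction with GPT-2: the model assigns a score
# to every token in its vocabulary as the possible continuation of a prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The patient presented with chest pain and"
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]  # scores for the next token only
top = torch.topk(logits, 5).indices
print([tok.decode(int(t)) for t in top])  # the five most likely next words
```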

In the study, published in the open-access journal PLOS Digital Health, Tiffany Kung, Victor Tseng, and colleagues at AnsibleHealth tested ChatGPT’s performance on the USMLE.

Taken by medical students and physicians-in-training, the USMLE assesses knowledge spanning most medical disciplines, ranging from biochemistry to diagnostic reasoning to bioethics.

After screening to remove image-based questions, the authors tested the software on 350 of the 376 public questions available from the June 2022 USMLE release.

After indeterminate responses were removed, ChatGPT scored between 52.4 per cent and 75 per cent across the three USMLE exams.

The passing threshold each year is approximately 60 per cent.
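As a worked example of the scoring convention, suppose, hypothetically, that 120 questions were attempted and 5 answers were indeterminate; accuracy is then computed over the 115 graded responses only. The counts below are made up; only the method mirrors the study.

```python
# Worked example of the study's scoring convention: indeterminate responses
# are removed before accuracy is computed. All counts are hypothetical.
def usmle_score(correct: int, incorrect: int, indeterminate: int) -> float:
    graded = correct + incorrect  # indeterminate answers are excluded
    assert graded > 0 and indeterminate >= 0
    return 100 * correct / graded

print(round(usmle_score(correct=75, incorrect=40, indeterminate=5), 1))  # 65.2
```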

ChatGPT also demonstrated 94.6 per cent concordance across all its responses and produced at least one significant insight (something that was new, non-obvious, and clinically valid) for 88.9 per cent of its responses.

Notably, ChatGPT exceeded the performance of PubMedGPT, a counterpart model trained exclusively on biomedical domain literature, which scored 50.8 per cent on an older dataset of USMLE-style questions.

“Reaching the passing score for this notoriously difficult expert exam, and doing so without any human reinforcement, marks a notable milestone in clinical AI maturation,” said the authors.

“ChatGPT contributed substantially to the writing of our manuscript. We interacted with ChatGPT much like a colleague, asking it to synthesise, simplify, and offer counterpoints to drafts in progress. All of the co-authors valued ChatGPT’s input,” said Kung.

ALSO READ: Microsoft mulls more data centres in India


ChatGPT helps hackers write malicious code, steal data

Cyber-security company Check Point Research (CPR) is witnessing attempts by Russian cybercriminals to bypass OpenAI’s restrictions, in order to use ChatGPT for malicious purposes…writes Nishant Arora

Any technology has two sides to it and artificial intelligence (AI)-driven ChatGPT (a third-generation Generative Pre-trained Transformer) is no exception. While it has become a rage on social media for answering like a human, hackers have jumped onto the bandwagon to misuse its capabilities to write malicious code and hack your devices.

Currently free for the public to use as part of a feedback exercise (a paid subscription is coming soon), ChatGPT, from its developer OpenAI, which is backed by Microsoft, has opened a Pandora’s box, as its uses are limitless, both good and bad.

Cyber-security company Check Point Research (CPR) is witnessing attempts by Russian cybercriminals to bypass OpenAI’s restrictions, in order to use ChatGPT for malicious purposes.

In underground hacking forums, hackers are discussing how to circumvent controls on IP addresses, payment cards and phone numbers, all of which are needed to gain access to ChatGPT from Russia.

CPR shared screenshots of what it saw and warned of hackers’ fast-growing interest in ChatGPT as a way to scale malicious activity.

“Right now, we are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes. We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations,” warned Sergey Shykevich, Threat Intelligence Group Manager at Check Point.

Cybercriminals are growing more and more interested in ChatGPT, because the AI technology behind it can make a hacker more cost-efficient.

Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes.

On December 29, a thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum.

The publisher of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.

On December 21, a threat actor posted a Python script, which he emphasised was the “first script he ever created”.

When another cybercriminal commented that the style of the code resembles OpenAI code, the hacker confirmed that OpenAI gave him a “nice (helping) hand to finish the script with a nice scope”.

This could mean that potential cybercriminals with little to no development skills could leverage ChatGPT to develop malicious tools and become fully-fledged cybercriminals with technical capabilities.

Another threat is that ChatGPT can be used to spread misinformation and fake news. OpenAI, however, is already alert on this front.

Its researchers have collaborated with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory in the US to investigate how large language models might be misused for disinformation purposes.

As generative language models improve, they open up new possibilities in fields as diverse as healthcare, law, education and science.

“But, as with any new technology, it is worth considering how they can be misused. Against the backdrop of recurring online influence operations – covert or deceptive efforts to influence the opinions of a target audience,” said a recent report based on a workshop that brought together 30 disinformation researchers, machine learning experts, and policy analysts.

“We believe that it is critical to analyse the threat of AI-enabled influence operations and outline steps that can be taken before language models are used for influence operations at scale,” the report mentioned.

ALSO READ: India, UAE ink deal on green hydrogen, under sea connectivity