
‘ChatGPT can perform analysis at fraction of human cost’

In some cases, the AI model managed to surpass the human data analysts in terms of the correctness of the figures and analysis…reports Asian Lite News

The cost of using OpenAI’s GPT-4 is just 0.45 per cent of the cost of hiring a senior data analyst who earns around $90,000 annually, or 0.71 per cent of that of a junior-level employee, a study has revealed.

Using large language models (LLMs) such as GPT-4, which powers ChatGPT, for data analysis costs less than 1 per cent of hiring a human analyst while delivering comparable performance, according to researchers from Damo Academy, the research arm of Chinese e-commerce giant Alibaba Group, and Singapore’s Nanyang Technological University.
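The percentages quoted above imply some concrete figures. A quick back-of-the-envelope check, assuming the percentages are taken against annual salary (the report does not spell out the basis):

```python
# Back-of-the-envelope check of the study's cost figures.
# Assumption: the quoted percentages are fractions of annual salary.
senior_salary = 90_000                              # senior analyst pay, per the report
gpt4_annual_cost = senior_salary * 0.0045           # 0.45% of senior pay -> ~$405/year
implied_junior_salary = gpt4_annual_cost / 0.0071   # 0.71% of junior pay -> ~$57,000/year

print(f"GPT-4 cost: ~${gpt4_annual_cost:,.0f}/year")
print(f"Implied junior salary: ~${implied_junior_salary:,.0f}/year")
```

On these assumptions, GPT-4 works out to roughly $405 a year against a $90,000 senior salary, and the 0.71 per cent figure would imply a junior salary of around $57,000.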

The study highlights the potential threat to job security amid increased adoption of generative artificial intelligence (AI), reports South China Morning Post.

The experiments showed that GPT-4 is also much faster than humans in completing the tasks.

“GPT-4 can also beat an entry-level human analyst in terms of performance, which was evaluated through a range of metrics including the correctness and fluency in charts and the insights they produced,” the report elaborated.

In some cases, the AI model managed to surpass the human data analysts in terms of the correctness of the figures and analysis.

“However, GPT-4 fell behind humans in terms of showing correct data in graphs, as well as presentation and formatting in some cases,” the results showed.

Despite errors with some figures, GPT-4 could still generate correct analysis, the study added.

According to global investment bank Goldman Sachs, nearly 300 million jobs could be lost to AI in the future.

A global economics research report from Goldman Sachs predicted that AI could automate 25 per cent of the entire labour market, including 46 per cent of tasks in administrative jobs, 44 per cent in legal jobs, and 37 per cent in architecture and engineering professions.


ChatGPT app for iOS now expands to more countries

The company also introduced a new feature called shared links. This feature allows the users to create and share ChatGPT conversations with others…reports Asian Lite News

OpenAI has expanded the availability of its iOS app to more countries. Initially launched only for the US market, the ChatGPT app can now be downloaded from the Apple App Store in the US as well as Albania, Croatia, France, Germany, Ireland, Jamaica, Korea, New Zealand, Nicaragua, Nigeria and the UK.

ChatGPT on iOS is yet to arrive in India. “We will continue to roll out to more countries and regions in the coming weeks,” said the company.

The company also introduced a new feature called shared links. This feature allows the users to create and share ChatGPT conversations with others.

“Recipients of your shared link can either view the conversation or copy it to their own chats to continue the thread. This feature is currently rolling out to a small set of testers in alpha, with plans to expand to all users (including free) in the upcoming weeks,” said OpenAI.

The Microsoft-backed company also integrated the browsing feature — currently in beta for paid users — deeply with Bing.

“You can now click into queries that the model is performing. We look forward to expanding the integration soon,” said the company.

ChatGPT users can also disable chat history on iOS.

“Conversations started on your device when chat history is disabled won’t be used to improve our models, won’t appear in your history on your other devices, and will only be stored for 30 days,” the company said.


Fake ChatGPT apps exploiting users

While OpenAI provides basic ChatGPT functionality to users for free online, these apps charged anywhere from $10 per month to $70 per year…reports Asian Lite News

Security experts have exposed several apps posing as ChatGPT-based chatbots that overcharge users and bring in thousands of dollars a month, a new report showed on Thursday.

According to cybersecurity company Sophos, a number of free apps on Google Play and the Apple App Store provide little functionality and are constantly ad-ridden, yet entice unsuspecting users into subscriptions costing hundreds of dollars a year.

“With interest in AI and chatbots arguably at an all-time high, users are turning to the Apple App and Google Play Stores to download anything that resembles ChatGPT. These types of scam apps — what Sophos has dubbed ‘fleeceware’ — often bombard users with ads until they sign up for a subscription,” said Sean Gallagher, principal threat researcher, Sophos.

According to the report, experts investigated five of these ChatGPT fleeceware apps, all of which claimed to be based on ChatGPT’s algorithm.

For instance, developers of the app “Chat GBT” used ChatGPT’s name to boost their rankings in Google Play or App Store.

While OpenAI provides basic ChatGPT functionality to users for free online, these apps charged anywhere from $10 per month to $70 per year.

After a three-day free trial, the iOS version of “Chat GBT”, called Ask AI Assistant, charges $6 per week, or $312 per year, and earned its developers $10,000 in March alone, the report said.

Moreover, the report mentioned that another fleeceware-like app, Genie, which encourages users to sign up for a $7 weekly or $70 annual subscription, earned $1 million in the previous month.
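The annualised figures quoted for these apps follow directly from their weekly prices. A sketch, assuming 52 billed weeks per year:

```python
# Annualising the weekly subscription prices quoted in the report.
WEEKS_PER_YEAR = 52

ask_ai_weekly = 6
ask_ai_annual = ask_ai_weekly * WEEKS_PER_YEAR           # $312/year, matching the report

genie_weekly = 7
genie_weekly_annualised = genie_weekly * WEEKS_PER_YEAR  # $364/year if billed weekly
genie_annual_plan = 70                                   # Genie's flat annual option
difference = genie_weekly_annualised - genie_annual_plan # the weekly route costs $294 more

print(ask_ai_annual, genie_weekly_annualised, difference)
```

The gap between Genie’s weekly and annual routes illustrates the fleeceware pattern: the recurring weekly charge quietly adds up to several times the headline annual price.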

“While some of the ChatGPT fleeceware apps included in this report have already been taken down, more continue to pop up – and it’s likely more will appear. The best protection is education. Users need to be aware that these apps exist and always be sure to read the fine print whenever hitting ‘subscribe’,” said Gallagher.


New ChatGPT, Bard like AI tool to turn thoughts into text

It might help people who are mentally conscious yet unable to physically speak…reports Asian Lite News

US scientists have developed a new artificial intelligence (AI) system that can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text.

The system, developed by a team at the University of Texas at Austin, relies in part on a transformer model, similar to the ones that power OpenAI’s ChatGPT and Google’s Bard.

It might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again, according to the team who published the study in the journal Nature Neuroscience.

Unlike other language decoding systems in development, this system, called a semantic decoder, does not require subjects to have surgical implants, making the process noninvasive. Participants also do not need to use only words from a prescribed list.

Brain activity is measured using a functional MRI (fMRI) scanner after extensive training of the decoder, during which the individual listens to hours of podcasts in the scanner.

Later, provided that the participant is open to having their thoughts decoded, their listening to a new story or imagining telling a story allows the machine to generate corresponding text from brain activity alone.

“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” said Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.

“We’re getting the model to decode continuous language for extended periods of time with complicated ideas,” he added.

The result is not a word-for-word transcript. Instead, researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, when the decoder has been trained to monitor a participant’s brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meanings of the original words.

For example, in experiments, a participant listening to a speaker say, “I don’t have my driver’s licence yet,” had their thoughts translated as, “She has not even started to learn to drive yet.”

The team also addressed questions about potential misuse of the technology in the study. The paper describes how decoding worked only with cooperative participants who had participated willingly in training the decoder.

Results for individuals on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance — for example, by thinking other thoughts — results were similarly unusable.

“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” said Jerry Tang, a doctoral student in computer science. “We want to make sure people only use these types of technologies when they want to and that it helps them.”

In addition to having participants listen or think about stories, the researchers asked subjects to watch four short, silent videos while in the scanner. The semantic decoder was able to use their brain activity to accurately describe certain events from the videos.

The system currently is not practical for use outside of the laboratory because of its reliance on the time needed on an fMRI machine. But the researchers think this work could transfer to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).


ChatGPT shows better empathy to patients than doctors

The study, published in JAMA Internal Medicine, compared written responses from physicians and those from ChatGPT to real-world health questions….reports Asian Lite News

ChatGPT outperforms physicians in providing high-quality, empathetic advice to patient questions, according to a study.

There has been widespread speculation about how advances in artificial intelligence (AI) assistants like ChatGPT could be used in medicine.

The study, published in JAMA Internal Medicine, compared written responses from physicians and those from ChatGPT to real-world health questions.

A panel of licensed health care professionals preferred ChatGPT’s responses 79 per cent of the time and rated ChatGPT’s responses as higher quality and more empathetic.

“The opportunities for improving health care with AI are massive,” said John W. Ayers from the Qualcomm Institute within the University of California San Diego. “AI-augmented care is the future of medicine,” he added.

In the new study, the research team set out to answer the question: Can ChatGPT respond accurately to questions patients send to their doctors?

If yes, AI models could be integrated into health systems to improve physician responses to questions sent by patients and ease the ever-increasing burden on physicians.

“ChatGPT might be able to pass a medical licensing exam,” said Dr. Davey Smith, a physician-scientist, co-director of the UC San Diego Altman Clinical and Translational Research Institute, “but directly answering patient questions accurately and empathetically is a different ballgame.”

According to the researchers, while the Covid-19 pandemic accelerated virtual health care adoption and made accessing care easier for patients, physicians are now burdened by a barrage of electronic patient messages seeking medical advice, which has contributed to record-breaking levels of physician burnout.

To understand how ChatGPT can help, the team randomly sampled 195 exchanges from Reddit’s AskDocs where a verified physician responded to a public question.

The team provided the original question to ChatGPT and asked it to author a response. A panel of three licensed health care professionals assessed each question and the corresponding responses and were blinded to whether the response originated from a physician or ChatGPT.

They compared responses based on information quality and empathy, noting which one they preferred. The panel of health care professional evaluators preferred ChatGPT responses to physician responses 79 per cent of the time.

ChatGPT messages responded with nuanced and accurate information that often addressed more aspects of the patient’s questions than physician responses, the study showed.

Additionally, ChatGPT responses were rated significantly higher in quality than physician responses: good or very good quality responses were 3.6 times higher for ChatGPT than physicians. The responses were also more empathic: empathetic or very empathetic responses were 9.8 times higher for ChatGPT than for physicians.

However, the team said, the ultimate solution is not throwing your doctor out altogether. “Instead, a physician harnessing ChatGPT is the answer for better and empathetic care,” said Adam Poliak, an assistant professor of Computer Science at Bryn Mawr College.


ChatGPT fails when it comes to accounting

On 11.3 per cent of questions, ChatGPT scored higher than the student average, doing particularly well on accounting information systems (AIS) and auditing…reports Asian Lite News

AI chatbot ChatGPT is still no match for humans when it comes to accounting. While the technology is a game changer in several fields, researchers say it still has work to do in this realm.

Microsoft-backed OpenAI’s newest AI chatbot product, GPT-4, which uses machine learning to generate natural language text, passed the bar exam with a score in the 90th percentile, passed 13 of 15 advanced placement (AP) exams and got a nearly perfect score on the GRE Verbal test.

“It’s not perfect; you’re not going to be using it for everything,” said Jessica Wood, currently a freshman at Brigham Young University (BYU) in the US. “Trying to learn solely by using ChatGPT is a fool’s errand.”

Researchers at BYU and 186 other universities wanted to know how OpenAI’s tech would fare on accounting exams. They put the original version, ChatGPT, to the test.

“We’re trying to focus on what we can do with this technology now that we couldn’t do before to improve the teaching process for faculty and the learning process for students. Testing it out was eye-opening,” said lead study author David Wood, a BYU professor of accounting.

Although ChatGPT’s performance was impressive, the students performed better.

Students scored an overall average of 76.7 per cent, compared to ChatGPT’s score of 47.4 per cent.

On 11.3 per cent of questions, ChatGPT scored higher than the student average, doing particularly well on accounting information systems (AIS) and auditing.

But the AI bot did worse on tax, financial, and managerial assessments, possibly because it struggled with the mathematical processes these require, said the study published in the journal Issues in Accounting Education.

When it came to question type, ChatGPT did better on true/false questions and multiple-choice questions, but struggled with short-answer questions.

In general, higher-order questions were harder for ChatGPT to answer.

“ChatGPT doesn’t always recognise when it is doing math and makes nonsensical errors such as adding two numbers in a subtraction problem, or dividing numbers incorrectly,” the study found.

ChatGPT often provides explanations for its answers, even if they are incorrect. Other times, ChatGPT’s descriptions are accurate, but it will then proceed to select the wrong multiple-choice answer.

“ChatGPT sometimes makes up facts. For example, when providing a reference, it generates a real-looking reference that is completely fabricated. The work and sometimes the authors do not even exist,” the findings showed.

That said, the authors fully expect GPT-4 to improve exponentially on the accounting questions posed in their study.


Musk’s ‘TruthGPT’ to rival ChatGPT

The revelation comes as the billionaire has created a new company called X.AI which will promote artificial intelligence (AI) in the ChatGPT era….reports Asian Lite News

After slamming OpenAI’s ChatGPT, Elon Musk is now working on “TruthGPT,” a ChatGPT alternative that will act as a “maximum truth-seeking AI.”

In an interview with Fox News, Musk said that an alternative approach to AI creation is needed to “avoid the destruction of humanity”.

“I’m going to start something which I call ‘TruthGPT’ or a maximum truth-seeking AI that tries to understand the nature of the universe,” Musk was quoted as saying.

“And I think this might be the best path to safety in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe,” the Twitter CEO added.


In February, Musk for the first time tweeted that what we need is a “TruthGPT.”

The revelation comes as the billionaire has created a new company called X.AI which will promote artificial intelligence (AI) in the ChatGPT era.

Incorporated in the US state of Nevada, the company has Musk as the only listed director, and Jared Birchall, director of Musk’s family office, as secretary, according to a filing.

Musk aims to create an AI firm to take on Microsoft-backed OpenAI.

In recent months, ChatGPT and GPT-4 have become a rage worldwide.

In March, several top entrepreneurs and AI researchers, including Musk and Steve Wozniak, Co-founder of Apple, wrote an open letter, asking all AI labs to immediately pause training of AI systems more powerful than GPT-4 for at least six months.


Debate rages on kids’ exposure to ChatGPT  

In Australia, almost every state and territory department of education blocked ChatGPT on the school internet networks…reports Asian Lite News

As conversational artificial intelligence (AI) becomes a talking point on social media, representatives from Indian schools and experts on Tuesday expressed divided views over children’s exposure to the AI chatbot in classrooms.

Schools around the world have already banned ChatGPT, citing concerns that the AI tool, which has been helping people write poems, essays and even work papers, can provide inaccurate information and enable cheating.

According to Nikita Tomar Mann, Principal at Indraprastha Global School in Noida, as amazing and fascinatingly incredible as it appears, “ChatGPT is still at a nascent stage” for us to fully comprehend its ramifications.

“Schools must keep it at bay for the time being, till such time we understand its need and utility at the school level,” Mann said.

Children should instead be trained to do their own research, assimilate information, and construct their own knowledge from it, she said.

“After all, it isn’t prudent to give up our unique thought processes as humans, to AI,” she noted.

According to the school authorities, ChatGPT is not a reliable source of information.

A growing number of schools at all levels in the US banned ChatGPT, prohibiting students from using it on school servers or even in aid of activities outside of school grounds.

In Australia, almost every state and territory department of education blocked ChatGPT on the school internet networks.

According to educationist Meeta Sengupta, ChatGPT can be a challenge for educators, as they have to help kids learn how to ask good questions.

“It can be used as a tool for asking questions; build up critical thinking skills in children. Though it is not reliable, children should not be denied using it because it is a tech of the future,” said Sengupta, adding that in the coming days, it will become more advanced.

Dr Sibi Shaji, Registrar, Garden City University (GCU) Bengaluru, said that ChatGPT should be allowed as it enables creative thinking and is more of “experiential learning”.

There are other concerns as well with AI chatbots.

Microsoft-backed OpenAI has now blocked access to its AI chatbot ChatGPT in Italy in response to an order from the local data protection authority to halt processing Italians’ data for the ChatGPT service.

In its order, the Italian regulator Garante said it’s concerned that the ChatGPT maker is breaching the European Union’s (EU) General Data Protection Regulation (GDPR), claiming that OpenAI has unlawfully processed the data of Italian citizens.

Several top entrepreneurs and AI researchers, including Tesla and Twitter CEO Elon Musk and Steve Wozniak, Co-founder of Apple, have also written an open letter, asking all AI labs to immediately pause training of AI systems more powerful than GPT-4 for at least 6 months.

Arguing that AI systems with human-competitive intelligence can pose profound risks to society and humanity, more than 1,100 global AI researchers and executives signed the open letter to pause “all giant AI experiments”. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” they wrote.


Italy bans ChatGPT citing data breach

Italy’s privacy watchdog said a data breach was reported affecting ChatGPT users’ conversations and information on payments by subscribers to the service

Authorities in Italy have blocked chatbot ChatGPT with immediate effect in the country.

With this, Italy becomes the first European country to block the advanced artificial intelligence software, which is capable of emulating human conversations, among other actions. The Italian data protection authority said on Friday (local time) that it is blocking the Microsoft-backed chatbot developed by US start-up OpenAI and will investigate whether it complied with the European Union’s General Data Protection Regulation.

A data breach affecting ChatGPT users’ conversations and information on payments by subscribers to the service had been reported on March 20, the Italian watchdog said.

Several countries, including China, Russia, Iran and North Korea, have blocked ChatGPT, which launched in November 2022.

The Italian Data Protection Authority (Garante per la protezione dei dati personali) said it has opened an investigation against ChatGPT and the US Company OpenAI.

“No way for ChatGPT to continue processing data in breach of privacy laws. The Italian SA imposed an immediate temporary limitation on the processing of Italian users’ data by OpenAI, the US-based company developing and managing the platform. An inquiry into the facts of the case was initiated as well,” the Authority stated as per a release on its website.

The authority noted the lack of information provided to users and to all interested parties whose data is collected by OpenAI, but above all the absence of a legal basis justifying the mass collection and storage of personal data for the purpose of “training” the algorithms underlying the operation of the platform.

The Italian SA emphasised in its order that the lack of any age verification mechanism exposes children to responses that are absolutely inappropriate to their age and awareness, even though the service is allegedly addressed to users aged above 13 according to OpenAI’s terms of service.

OpenAI is not established in the EU; however, it has designated a representative in the European Economic Area.

The Italian data protection authority said that OpenAI must notify it within 20 days of the measures implemented to comply with the order; otherwise, a fine of up to EUR 20 million or 4 per cent of its total worldwide annual turnover may be imposed.

In its order, the Italian SA highlights that no information is provided to users and data subjects whose data are collected by Open AI; more importantly, there appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.

As confirmed by the tests carried out so far, the information made available by ChatGPT does not always match factual circumstances, so inaccurate personal data are processed, the Data Protection Authority of Italy said. (ANI)


ChatGPT bug may have exposed payment information

Due to the bug, some subscription confirmation emails generated during that window were sent to the wrong users…reports Asian Lite News

OpenAI, the creator of ChatGPT, has admitted that some users’ payment information may have been exposed earlier this week when it took ChatGPT offline owing to a bug.

The Microsoft-backed company said it took ChatGPT offline due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history.

“It was also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time,” said the company.

The bug has been patched and ChatGPT service and its chat history feature, with the exception of a few hours of history, have been restored.

However, upon deeper investigation, OpenAI discovered that the same bug may have caused the unintentional visibility of “payment-related information of 1.2 per cent of the ChatGPT Plus subscribers who were active during a specific nine-hour window”.

“In the hours before we took ChatGPT offline, it was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date. Full credit card numbers were not exposed at any time,” the company revealed.

Due to the bug, some subscription confirmation emails generated during that window were sent to the wrong users.

These emails contained the last four digits of another user’s credit card number, but full credit card numbers did not appear.

“It’s possible that a small number of subscription confirmation emails might have been incorrectly addressed prior to March 20, although we have not confirmed any instances of this,” OpenAI further said.

The company said it has reached out to notify affected users that their payment information may have been exposed.

“We are confident that there is no ongoing risk to users’ data,” it added, apologising again to users and to the entire ChatGPT community.

The bug was discovered in the open-source Redis client library “redis-py”.
