ChatGPT maker OpenAI likely to go bankrupt by end of 2024

A recent report by Investopedia claimed that it is too early for any leading AI company, such as OpenAI, Anthropic, or Inflection, to head into the initial public offering (IPO) market…reports Asian Lite News

ChatGPT maker OpenAI is likely to go bankrupt by the end of 2024 if it doesn’t get more funding soon, according to media reports.

Analytics India Magazine reported that the ChatGPT website has seen a continuous decline in users in the first six months of the year.

The number of users declined to 1.5 billion in July, from 1.7 billion in June and 1.9 billion in May, according to data from analytics company SimilarWeb. These figures do not include API usage or the ChatGPT mobile app.

One theory holds that the decline began because students were out of school from May; another says that people have started building their own bots instead of using the original offering.

“I am no longer allowed to use ChatGPT at work, but we have developed our own internal model based on ChatGPT,” a user said in a tweet.

Another issue is money. After OpenAI developed ChatGPT, which has created a ruckus in the job market over fears that it may replace human creativity, the company’s losses doubled to around $540 million last year, according to a report by The Information in May.

This comes even as ChatGPT reportedly costs a whopping $700,000 (Rs 5.80 crore) per day to operate. OpenAI CEO Sam Altman himself admitted in a tweet that “compute costs are eye-watering”.

A recent report by Investopedia claimed that it is too early for any leading AI company, such as OpenAI, Anthropic, or Inflection, to head into the initial public offering (IPO) market.

“It is because it takes at least 10 years of operation and $100 million in revenue for an IPO to be successful,” the report said. In addition, billionaire Elon Musk is increasing the pressure with claims that he is building a rival chatbot.

While Microsoft-backed OpenAI has projected annual revenue of $200 million for 2023 and aims to reach $1 billion in 2024, its losses are mounting, and it is largely surviving on Microsoft’s $10 billion investment.

ALSO READ-Researchers ‘hypnotise’ ChatGPT into hacking

OpenAI to roll out ‘huge set’ of ChatGPT updates   

Reacting to his post, a number of users praised the new ChatGPT updates. A user wrote, “Great updates. Would love the ability to search through the history”…reports Asian Lite News

Microsoft-backed OpenAI’s first developer advocate and developer relations expert, Logan Kilpatrick, has posted on X (formerly Twitter) that a “huge set of ChatGPT updates are rolling out over the next week”.

Among the new features Kilpatrick highlighted are example prompts; suggested replies and follow-up questions; a default GPT-4 setting, so that paying ChatGPT Plus subscribers don’t have to toggle on the latest and most advanced publicly available OpenAI large language model (LLM) every time they start a new chat; support for multiple file uploads for all Plus users when using the OpenAI Code Interpreter plugin; and more.

Reacting to his post, a number of users praised the new ChatGPT updates. A user wrote, “Great updates. Would love the ability to search through the history”.

“Everyone wants this and I hope it lands eventually! Search is live in iOS btw,” Kilpatrick replied.

“These are great changes, congrats to the team! If possible, please consider making the pages translated and localised. Most people know that we can interact with it in many languages, but the landing pages and interaction pages are a major hurdle to people that don’t speak English,” another user said.

One more user commented, “Very solid updates. Love the default/suggested prompts! Text-based interfaces are very powerful but users still don’t like looking at an empty text box. Much better for AI-powered apps to present users with context-aware suggestions that can be customised as needed”.

Last month, OpenAI introduced a new ‘custom instructions’ feature for ChatGPT that allows users to share anything they want the artificial intelligence (AI) chatbot to take into account in future conversations.

“Custom instructions are currently available in Beta for Plus users, and we plan to roll out to all users soon,” the company said in an article. Users can edit or delete custom instructions at any time for new conversations.

ALSO READ-ChatGPT fined for exposing personal info of 687 S. Koreans

ChatGPT fined for exposing personal info of 687 S. Koreans

A total of 687 users in South Korea have been confirmed to be among those affected by the exposure…reports Asian Lite News

South Korea’s Personal Information Protection Commission (PIPC) on Thursday imposed a fine of 3.6 million won ($2,829) on OpenAI, the operator of the generative AI chatbot ChatGPT, for exposing the personal information of 687 South Korean citizens.

According to OpenAI, a now-patched bug in an open-source library used by ChatGPT created a caching issue in March. It made payment-related information of some ChatGPT Plus subscribers unintentionally visible during a nine-hour window, including first and last names, email addresses, the last four digits of credit card numbers and credit card expiration dates, Yonhap reported.
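
For illustration only, here is a minimal, hypothetical Python sketch of how a shared response cache that is not keyed per user can surface one subscriber’s billing details to another. It is not OpenAI’s actual code; the library involved is not named in this report, and the function names below (get_billing_summary, fetch_billing_from_db) are invented for the example.

cache = {}  # shared in-process cache, keyed only by endpoint

def fetch_billing_from_db(user_id: str) -> dict:
    # Stand-in for a real database lookup.
    return {"user": user_id, "card_last4": "1234", "expires": "12/25"}

def get_billing_summary(user_id: str) -> dict:
    key = "billing_summary"  # bug: the cache key omits the user id
    if key not in cache:
        cache[key] = fetch_billing_from_db(user_id)
    return cache[key]  # a later caller may receive another user's cached data

print(get_billing_summary("alice"))  # populates the cache with alice's details
print(get_billing_summary("bob"))    # bob is shown alice's details instead of his own
# A correct version would include the user in the key, e.g. f"billing_summary:{user_id}".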

A total of 687 users in South Korea have been confirmed to be among those affected by the exposure.

The PIPC said it fined OpenAI for breaching its duty to report a data leak to the authorities within 24 hours of discovering it. However, the privacy watchdog concluded that the company could not be held responsible for lax personal information protection measures.

The watchdog has also recommended OpenAI take measures to prevent a recurrence of the incident, comply with South Korea’s personal information protection law and cooperate actively with the commission’s prior inspection activities, it said.

ALSO READ-Edtech firms in a real fix

US med school experiments with GPT-4

GPT-4 also provided the correct diagnosis in its list of potential diagnoses in two-thirds of challenging cases, revealed the findings, published in JAMA…reports Asian Lite News

In a significant experiment, a US medical school used OpenAI’s GPT-4 to see whether it could make accurate diagnoses in challenging medical cases.

Physician-researchers at Beth Israel Deaconess Medical Center (BIDMC) in Boston found that GPT-4 selected the correct diagnosis nearly 40 per cent of the time.

GPT-4 also provided the correct diagnosis in its list of potential diagnoses in two-thirds of challenging cases, revealed the findings, published in JAMA.

“Recent advances in artificial intelligence have led to generative AI models that are capable of detailed text-based responses that score highly in standardised medical examinations,” said Adam Rodman, co-director of the Innovations in Media and Education Delivery (iMED) Initiative at BIDMC.

“We wanted to know if such a generative model could ‘think’ like a doctor, so we asked one to solve standardised complex diagnostic cases used for educational purposes. It did really, really well,” said Rodman, also an instructor in medicine at Harvard Medical School.

To assess the chatbot’s diagnostic skills, Rodman and colleagues used clinicopathological case conferences (CPCs), a series of complex and challenging patient cases including relevant clinical and laboratory data, imaging studies, and histopathological findings published in the New England Journal of Medicine for educational purposes.

Evaluating 70 CPC cases, the artificial intelligence exactly matched the final CPC diagnosis in 27 of them (39 per cent). In 64 per cent of the cases, the final CPC diagnosis was included in the AI’s differential — a list of possible conditions that could account for a patient’s symptoms, medical history, clinical findings and laboratory or imaging results.
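
A quick back-of-the-envelope check of the reported figures; note that the 45-case count below is inferred from the stated 64 per cent and is not given explicitly in the study.

total_cases = 70
exact_matches = 27
print(f"Exact match rate: {exact_matches / total_cases:.0%}")  # prints 39%
# A 64 per cent inclusion rate implies roughly this many differentials contained the final diagnosis:
print(round(0.64 * total_cases))  # prints 45 (inferred, not stated in the study)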

“While chatbots cannot replace the expertise and knowledge of a trained medical professional, generative AI is a promising potential adjunct to human cognition in diagnosis,” said first author Zahir Kanjee, a hospitalist at BIDMC and assistant professor of medicine at Harvard Medical School.

“It has the potential to help physicians make sense of complex medical data and broaden or refine our diagnostic thinking,” he said.

While the study adds to a growing body of literature demonstrating the promising capabilities of AI technology, more research is needed on its optimal uses, benefits and limits, and notably on privacy issues, to understand how these new AI models might transform health care delivery.

ALSO READ-Samsung may integrate ChatGPT into Internet Browser app

ChatGPT officiates wedding in absence of priest

This is not the first time that an AI chatbot like ChatGPT has done something unusual like this…reports Asian Lite News

In an unprecedented but heartwarming event, OpenAI’s chatbot ChatGPT stepped up to officiate the wedding of a US couple when faced with the unanticipated absence of a priest.

Reece Wiench and Deyton Truitt celebrated their wedding last weekend with the voice of the ChatGPT AI app leading the way, Fox News reported.

“Thank you all for joining us today to celebrate the extraordinary love and unity of Reece Wiench and Deyton Truitt,” the chatbot said at the couple’s wedding last month.

Wiench and Truitt said that they planned their wedding in five days because Truitt was about to deploy for the Army and Wiench wanted to join him after basic training.

In the US state of Colorado, there is no requirement for a licensed marriage official to officiate ceremonies, so the bride’s father, Stephen Wiench, came up with the idea of using a more accessible and cost-effective officiant option.

The chatbot was at first hesitant to conduct the ceremony, according to the report.

“It said ‘no’ at first. ‘I can’t do this, I don’t have eyes, I don’t have a body. I can’t officiate at your wedding,'” Wiench was quoted as saying. The couple persisted and provided personal information about themselves to the chatbot, which was woven into ChatGPT’s remarks during the ceremony.

This is not the first time that an AI chatbot like ChatGPT has done something unusual like this.

Last week, a woman revealed that her long-time client stopped working with her after discovering that she was using a ChatGPT-like artificial intelligence (AI) to write content.

Last month, a US judge sanctioned the lawyer who submitted a legal brief written by ChatGPT, which included citations of non-existent court opinions and fake quotes.

ALSO READ-ChatGPT 4 excels at picking the right imaging tests

ChatGPT 4 excels at picking the right imaging tests

They asked the AI in an open-ended way and by giving ChatGPT a list of options. They tested ChatGPT 3.5 as well as ChatGPT 4, a newer, more advanced version…reports Asian Lite News

OpenAI’s ChatGPT can support the clinical decision-making process, including when picking the correct radiological imaging tests for breast cancer screening or breast pain, finds a study.

The study, by investigators from Mass General Brigham in the US, suggests that large language models have the potential to assist decision-making for primary care doctors and referring providers in evaluating patients and ordering imaging tests for breast pain and breast cancer screenings. Their results are published in the Journal of the American College of Radiology.

“In this scenario, ChatGPT’s abilities were impressive,” said corresponding author Marc D. Succi, associate chair of Innovation and Commercialisation at Mass General Brigham Radiology and executive director of the MESH Incubator.

“I see it acting like a bridge between the referring healthcare professional and the expert radiologist — stepping in as a trained consultant to recommend the right imaging test at the point of care, without delay.

“This could reduce administrative time on both referring and consulting physicians in making these evidence-backed decisions, optimise workflow, reduce burnout, and reduce patient confusion and wait times,” Succi said.

In the study, the researchers asked ChatGPT 3.5 and 4 to help them decide which imaging tests to use for 21 made-up patient scenarios involving the need for breast cancer screening or the reporting of breast pain, judging the answers against the American College of Radiology’s Appropriateness Criteria.

They asked the AI in an open-ended way and by giving ChatGPT a list of options. They tested ChatGPT 3.5 as well as ChatGPT 4, a newer, more advanced version.
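
As an illustration of the two question formats described above, the hypothetical prompts below show how an open-ended question and a multiple-choice question might be phrased; they are not the researchers’ actual wording, and the clinical details are invented.

# Hypothetical prompts illustrating the two formats; not the study's actual wording.
scenario = "A 45-year-old woman presents with new focal left breast pain and no palpable mass."

# Open-ended: the model must propose an imaging test on its own.
open_ended_prompt = f"{scenario}\nWhat is the most appropriate breast imaging test for this patient?"

# Multiple choice: the model selects from a fixed list of options.
options = ["Diagnostic mammography", "Breast ultrasound", "Breast MRI", "No imaging indicated"]
multiple_choice_prompt = (
    f"{scenario}\nChoose the single most appropriate imaging test from the following options:\n"
    + "\n".join(f"- {option}" for option in options)
)

print(open_ended_prompt)
print(multiple_choice_prompt)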

ChatGPT 4 outperformed 3.5, especially when given the available imaging options. For example, when asked about breast cancer screenings, and given multiple choice imaging options, ChatGPT 3.5 answered an average of 88.9 per cent of prompts correctly, and ChatGPT 4 got about 98.4 per cent right.

“This study doesn’t compare ChatGPT to existing radiologists because the existing gold standard is actually a set of guidelines from the American College of Radiology, which is the comparison we performed,” Succi said.

“This is purely an additive study, so we are not arguing that the AI is better than your doctor at choosing an imaging test but can be an excellent adjunct to optimise a doctor’s time on non-interpretive tasks.”

ALSO READ-Benz adds ChatGPT to voice control of its vehicles

ChatGPT on iOS gets ‘Drag and Drop’ support

OpenAI currently offers the ChatGPT app only on iOS, with an Android version in the works that it has promised to bring to market soon…reports Asian Lite News

OpenAI on Thursday updated the ChatGPT application on iOS and iPadOS with the introduction of ‘Drag and Drop’ support.

With this new feature, individual messages can now be dragged and dropped into other applications.

The application now also takes advantage of the entire iPad screen with the new update. Moreover, the company introduced Siri and Shortcuts integration, which means the app can now be used directly with Siri and the Shortcuts app.

Last month, the ChatGPT app was released for iOS users in India as the company expanded the availability of the app to more countries.

Currently, OpenAI offers the ChatGPT app only on iOS, with an Android version in the works that it has promised to bring to market soon.

Meanwhile, the company introduced a feature called ‘shared links’ for the app, which allows users to create and share ChatGPT conversations with others.

“Recipients of your shared link can either view the conversation or copy it to their own chats to continue the thread. This feature is currently rolling out to a small set of testers in alpha, with plans to expand to all users (including free) in the upcoming weeks,” OpenAI said.

ALSO READ-ChatGPT performs poorly on US urologists’ exam

ChatGPT performs poorly on US urologists’ exam

The explanations provided by ChatGPT were longer than those provided by SASP, but “frequently redundant and cyclical in nature”, according to the authors…reports Asian Lite News

OpenAI’s much-acclaimed ChatGPT chatbot has failed a urology exam in the US, according to a study.

This comes at a time of growing interest in the potential role of artificial intelligence (AI) technology in medicine and healthcare.

The study, reported in the journal Urology Practice, showed that ChatGPT achieved less than a 30 per cent rate of correct answers on the American Urological Association’s (AUA) widely used Self-Assessment Study Program for Urology (SASP).

“ChatGPT not only has a low rate of correct answers regarding clinical questions in urologic practice, but also makes certain types of errors that pose a risk of spreading medical misinformation,” said Christopher M. Deibert, from the University of Nebraska Medical Center.

The AUA’s Self-Assessment Study Program (SASP) is a 150-question practice examination addressing the core curriculum of medical knowledge in urology.

The study excluded 15 questions containing visual information such as pictures or graphs.

Overall, ChatGPT gave correct answers to less than 30 per cent of SASP questions: 28.2 per cent of multiple-choice questions and 26.7 per cent of open-ended questions.

The chatbot provided “indeterminate” responses to several questions. On these questions, accuracy decreased when the model was asked to regenerate its answers.

For most open-ended questions, ChatGPT provided an explanation for the selected answer.

The explanations provided by ChatGPT were longer than those provided by SASP, but “frequently redundant and cyclical in nature”, according to the authors.

“Overall, ChatGPT often gave vague justifications with broad statements and rarely commented on specifics,” Dr. Deibert said.

Even when given feedback, “ChatGPT continuously reiterated the original explanation despite it being inaccurate”.

The researchers suggest that while ChatGPT may do well on tests requiring recall of facts, it seems to fall short on questions pertaining to clinical medicine, which require “simultaneous weighing of multiple overlapping facts, situations and outcomes”.

“Given that LLMs are limited by their human training, further research is needed to understand their limitations and capabilities across multiple disciplines before it is made available for general use,” Dr. Deibert said.

“As is, utilisation of ChatGPT in urology has a high likelihood of facilitating medical misinformation for the untrained user.”

ALSO READ-US judge orders lawyers not to use ChatGPT-drafted content  

US judge orders lawyers not to use ChatGPT-drafted content  

Lawyer Steven A. Schwartz, representing a man who sued an airline, admitted in an affidavit that he had used OpenAI’s chatbot for his research…reports Asian Lite News

A US federal judge has categorically told lawyers that he will not allow any AI-generated content in his court.

Texas federal judge Brantley Starr said that any attorney appearing in his court must attest that “no portion of the filing was drafted by generative artificial intelligence”, or if it was, that it was checked “by a human being”, reports TechCrunch.

“All attorneys appearing before the court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being,” read the standing order.

According to the judge, these AI platforms are incredibly powerful and have many uses in the law — form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument.

“But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up — even quotes and citations,” the judge’s order further read.

Last week, ChatGPT fooled a lawyer into believing that citations given by the AI chatbot in a case against Colombian airline Avianca were real when they were, in fact, bogus.

Lawyer Steven A. Schwartz, representing a man who sued an airline, admitted in an affidavit that he had used OpenAI’s chatbot for his research.

After the opposing counsel pointed out the non-existent cases, US District Judge Kevin Castel confirmed that six of the submitted cases “appear to be bogus judicial decisions with bogus quotes and bogus internal citations”.

The judge set up a hearing as he considered sanctions for the plaintiff’s lawyers.

Last month, as part of a research exercise, ChatGPT falsely named an innocent and highly respected US law professor on a list of legal scholars who had sexually harassed students in the past.

Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, was left shocked when he realised that ChatGPT had named him in the project’s list of legal scholars who had sexually harassed someone.

“ChatGPT recently issued a false story accusing me of sexually assaulting students,” Turley posted in a tweet.

ALSO READ-‘ChatGPT can perform analysis at fraction of human cost’

‘ChatGPT can perform analysis at fraction of human cost’

In some cases, the AI model managed to surpass the human data analysts in terms of the correctness of the figures and analysis…reports Asian Lite News

The cost of using OpenAI’s GPT-4 is just 0.45 per cent of the cost of hiring a senior data analyst, who earns around $90,000 annually, or 0.71 per cent of the cost of a junior-level employee, a study has revealed.

Using large language models (LLM) like GPT-4, that powers ChatGPT, in data analysis costs less than 1 per cent of hiring a human analyst while turning in comparable performances, according to researchers from Damo Academy, the research arm of Chinese e-commerce giant Alibaba Group, and Singapore’s Nanyang Technological University.
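
As an illustration of the kind of LLM-assisted data analysis the study describes, here is a minimal sketch using the OpenAI Python client; the model name, prompt and data are placeholders, and this is not the researchers’ actual setup.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

csv_snippet = """month,revenue
Jan,120
Feb,135
Mar,128"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a data analyst. Reply with key figures and a short insight."},
        {"role": "user", "content": f"Analyse this sales data and describe the trend:\n{csv_snippet}"},
    ],
)
print(response.choices[0].message.content)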

The study highlights the potential threat to job security amid increased adoption of generative artificial intelligence (AI), reports South China Morning Post.

The experiments showed that GPT-4 is also much faster than humans in completing the tasks.

“GPT-4 can also beat an entry-level human analyst in terms of performance, which was evaluated through a range of metrics including the correctness and fluency in charts and the insights they produced,” the report elaborated.

In some cases, the AI model managed to surpass the human data analysts in terms of the correctness of the figures and analysis.

“However, GPT-4 fell behind humans in terms of showing correct data in graphs, as well as presentation and formatting in some cases,” the results showed.

Despite errors with some figures, GPT-4 could still generate correct analysis, the study added.

According to global investment bank Goldman Sachs, nearly 300 million jobs could be lost to AI in the future.

A global economics research report from Goldman Sachs predicted that AI could automate 25 per cent of the entire labour market, and as much as 46 per cent of tasks in administrative jobs, 44 per cent in legal jobs, and 37 per cent in architecture and engineering professions.

ALSO READ-ChatGPT app for iOS now expands to more countries