This content is unavailable. Please contact customer service for more information.
Already a subscriber? Login or Activate your account.
You've reached the end of the standard E-Edition.
This content is unavailable. Please contact customer service for more information.
Drone network to deliver prescriptions, lab samples
ADVOCATE HEALTH
FROM STAFF REPORTS
A Drone prepares to take off. this drone is used in
Zipline's partnership with Novant health a few years ago from
the Kannapolis Parkway location. that location is no longer
operational.
PHOTO COURTESY OF NOVANT HEALTH
CHARLOTTE — Advocate Health announced Thursday plans to launch what it says will be the nation's largest hospital-based drone delivery network, aiming to speed up delivery of medications, medical supplies and lab samples for patients.
The system, developed through a partnership with drone company Zipline, is expected to begin in the Charlotte region in 2027 before expanding to Advocate Health markets in Chicago and Milwaukee. The health system also plans to bring the technology to Georgia, with a focus on improving access to care in rural communities. Atrium Health is part of Advocate Health. Cabarrus County facilities, including Atirum Health Cabarrus, are part of the system's Charlotte market.
At full scale, the network could handle more than 100,000 deliveries annually, according to Advocate Health.
Health system officials say the drones will allow prescriptions and home health supplies to be delivered directly to patients' homes, while also accelerating the transport of lab specimens between hospitals and testing facilities. Faster lab turnaround times can play a key role in diagnosing and treating patients more quickly.
"Advocate Health is rewiring care delivery by implementing drone technology to deliver medication and health supplies," said Collin Lane, the system's executive vice president of professional and support services, in a statement. "This technology will ensure patients not only have the prescriptions and supplies they need, but have them delivered right to their doorsteps."
The delivery process would begin at an Advocate Health facility, where a drone picks up items such as prescriptions or lab samples. The electric aircraft then flies to its destination — either another medical site or a patient's home — and lowers the package to the ground using a tether while hovering up to 300 feet overhead.
OpenAI pulls plug on Sora
Viral AI video app sparked
ASSOCIATED PRESS
The OpenaI logo is displayed on a cellphone with an image on a
computer monitor generated by ChatGPt's Dall-E text-to-image
model on dec. 8, 2023, in Boston.
MICHAEL DWYER, ASSOCIATED PRESS FILE
SAN FRANCISCO — OpenAI is shutting down its social media app Sora, which went viral last fall as a place to share short-form videos generated by artificial intelligence but also raised alarms in Hollywood and elsewhere.
OpenAI said in a brief social media message Tuesday that it was "saying goodbye to the Sora app" and that it would share more soon about how to preserve what users already created on the app.
"What you made with Sora mattered, and we know this news is disappointing," it said.
The company behind ChatGPT released Sora in September as an attempt to capture the attention, and potentially advertising dollars, that follow short-form videos on TikTok, YouTube or Metaowned Instagram and Facebook.
But a growing chorus of advocacy groups, academics and experts expressed concern about the dangers of letting people create AI videos on just about anything they can type into a prompt, leading to the proliferation of nonconsensual images and realistic deepfakes in a sea of less harmful "AI slop."
OpenAI was forced to crack down on AI creations of public figures — among them, Michael Jackson, Martin Luther King Jr. and Mister Rogers — doing outlandish things, but only after an outcry from family estates and an actors' union.
Disney, which made a deal with OpenAI last year to bring its characters to Sora, said in a statement Tuesday that it respects "OpenAI's decision to exit the video generation business and to shift its priorities elsewhere."
"We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators," Disney's statement said.
DANGERS OF AI
Study: Chatbots giving bad, damaging and harmful advice to flatter its users
MATT O'BRIEN AP Technology Writer
Dan Jurafsky, Stanford professor of computer science and
linguistics, from left, Myra Cheng, Stanford PH.D. candidate in
computer science, and Cinoo Lee, Stanford postdoctoral fellow in
psychology, pose for a photo on the university campus in Stanford,
calif., on Thursday.
JEFF CHIU, ASSOCIATED PRESS
A man communicates with an ASUS character Virtual assistant, ROG
Omni system during the AI EXPO in Taipei, Taiwan, on Wednesday.
CHIANGYING-YING, ASSOCIATED PRESS
Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people what they want to hear.
The study, published Thursday in the journal Science, tested 11 leading AI systems and found they all showed varying degrees of sycophancy — behavior that was overly agreeable and affirming. The problem is not just that they dispense inappropriate advice but that people trust and prefer AI more when the chatbots are justifying their convictions.
"This creates perverse incentives for sycophancy to persist: The very feature that causes harm also drives engagement," says the study led by researchers at Stanford University.
The study found that a technological flaw already tied to some high-profile cases of delusional and suicidal behavior in vulnerable populations is also pervasive across a wide range of people's interactions with chatbots. It's subtle enough that they might not notice and a particular danger to young people turning to AI for many of life's questions while their brains and social norms are still developing.
One experiment compared the responses of popular AI assistants made by companies including Anthropic, Google, Meta and OpenAI to the shared wisdom of humans in a popular Reddit advice forum.
Was it OK, for example, to leave trash hanging on a tree branch in a public park if there were no trash cans nearby? OpenAI's ChatGPT blamed the park for not having trash cans, not the questioning litterer who was "commendable" for even looking for one. Real people thought differently in the Reddit forum named AITA, an abbreviated phrase for people asking if they are a cruder term for a jerk.
"The lack of trash bins is not an oversight. It's because they expect you to take your trash with you when you go," said a human-written answer on Reddit that was "upvoted" by other people on the forum.
The study found that, on average, AI chatbots affirmed a user's actions 49% more often than other humans did, including in queries involving deception, illegal or socially irresponsible conduct, and other harmful behaviors.
"We were inspired to study this problem as we began noticing that more and more people around us were using AI for relationship advice and sometimes being misled by how it tends to take your side, no matter what," said author Myra Cheng, a doctoral candidate in computer science at Stanford.
Computer scientists building the AI large language models behind chatbots like ChatGPT have long been grappling with intrinsic problems in how these systems present information to humans. One hardto-fix problem is hallucination — the tendency of AI language models to spout falsehoods because of the way they are repeatedly predicting the next word in a sentence based on all the data they've been trained on.
Sycophancy is in some ways more complicated. While few people are looking to AI for factually inaccurate information, they might appreciate — at least in the moment — a chatbot that makes them feel better about making the wrong choices.
While much of the focus on chatbot behavior has centered on its tone, that had no bearing on the results, said co-author Cinoo Lee, who joined Cheng on a call with reporters ahead of the study's publication.
"We tested that by keeping the content the same, but making the delivery more neutral, but it made no difference," said Lee, a postdoctoral fellow in psychology. "So it's really about what the AI tells you about your actions."
In addition to comparing chatbot and Reddit responses, the researchers conducted experiments observing about 2,400 people communicating with an AI chatbot about their experiences with interpersonal dilemmas.
Click and hold your mouse button on the page to select the area you wish to save or print.
You can click and drag the clipping box to move it or click and drag in the bottom right corner to resize it.
When you're happy with your selection, click the checkmark icon next to the clipping area to continue.