By: JON HAWORTH, ABC News

(NEW YORK) — A pandemic of the novel coronavirus has now killed more than 342,000 people worldwide.

Over 5.3 million people across the globe have been diagnosed with COVID-19, the disease caused by the new respiratory virus, according to data compiled by the Center for Systems Science and Engineering at Johns Hopkins University. The actual numbers are believed to be much higher due to testing shortages, many unreported cases and suspicions that some governments are hiding the scope of their nations’ outbreaks.

The United States is the worst-affected country in the world, with more than 1.6 million diagnosed cases and at least 96,046 deaths.

Today’s biggest developments:
– US nears 100,000 deaths
– Wuhan lab director calls virus leak claims ‘pure fabrication’
– Michigan and Missouri announce change in reporting of COVID-19 testing data
– Brazil passes Russia, now has second most confirmed cases globally
– Scientist claims COVID-19 ‘disappearing’ so fast, Oxford vaccine has ‘only 50% chance of working’

Here’s how the news is developing today. All times Eastern.

9:20 a.m.: Success of reopening will depend on following guidance, Birx says

Dr. Deborah Birx, one of the leaders of the government’s response to the virus, pinned the success of reopening efforts on the public’s ability to follow the direction of public health experts.

“I think it’s our job as public health officials, every day, to be informing the public of what puts them at risk,” said Birx, the White House coronavirus response coordinator, in an interview on ABC’s “This Week” Sunday. “We’ve learned a lot about this virus, but we now need to translate that learning into real behavior change that stays with us so we can continue to drive down the number of cases.”

“This only works if we all follow the guidelines and protect one another,” Birx continued.

Despite the U.S.
death toll approaching 100,000, Birx struck a cautiously optimistic tone on Friday during a White House press conference — her first in several weeks — sharing approval of increased public activity over Memorial Day weekend, provided precautionary measures, like social distancing, continue to be adhered to.

“You can go to the beaches if you stay 6 feet apart,” she said. “But remember that is your space, and that is the space you need to protect to ensure you are socially distancing for others.”

9:05 a.m.: Coronavirus ‘is not yet contained,’ FDA commissioner says

As states begin to open up during Memorial Day weekend, the FDA commissioner reminded the public to continue to protect themselves from exposure to the coronavirus.

“I again remind everyone that the coronavirus is not yet contained,” Dr. Stephen Hahn said in a Twitter post on Sunday morning. “It is up to every individual to protect themselves and their community. Social distancing, hand washing and wearing masks protect us all.”

6:15 a.m.: Scientist claims COVID-19 ‘disappearing’ so fast, Oxford vaccine has ‘only 50% chance of working’

The professor co-leading the vaccine development says the virus is disappearing so quickly in Britain that the vaccine trial being run by Oxford University has only a 50% chance of success. The trial depends on having enough vaccinated people to essentially go out into the wild and catch the virus in order for the vaccine to be tested.

Earlier in the year, when the infection rate was much higher, researchers expected an 80% chance of an effective vaccine. That has now dropped to 50%, according to Professor Adrian Hill.

“It’s a race against the virus disappearing, and against time,” Hill told the Telegraph newspaper in the U.K.
“At the moment, there’s a 50% chance that we get no result at all.”

The experimental vaccine, known as ChAdOx1 nCoV-19, is one of the front-runners in the global race to provide protection against the new coronavirus causing the COVID-19 pandemic. Hill’s team began early-stage human trials of the vaccine in April, making it one of only a handful to have reached that milestone.

6:00 a.m.: Brazil passes Russia, now has second most confirmed cases globally

Brazil has now surpassed Russia, with its total number of confirmed cases standing at 347,398, up 16,508 from the previous figure, according to Johns Hopkins University. Russia on Sunday reported an updated total of 344,481.

Brazil now has the second most confirmed cases globally, with the current number likely to rise even higher once newer figures are reported.

4:51 a.m.: Michigan and Missouri announce change in reporting of COVID-19 testing data

The state of Michigan announced that it will change the way it reports COVID-19 testing data by separating the results of diagnostic tests and serology tests.

The Michigan Department of Health and Human Services said in a statement that “the change makes the data more accurate and relevant as the state continues to expand diagnostic testing to help slow and contain the spread of COVID-19. The update to the website separates out the results of two different types of tests – serology and diagnostic. Michigan – along with some other states – has not separated data for diagnostic and serology tests. Data on serology testing – also known as antibody testing – is separated from the other testing numbers. Currently, serology testing can be used to help determine whether someone has ever had COVID-19, while traditional viral diagnostic tests determine if someone has active disease.”

“Diagnostic tests are most helpful in tracking the spread of COVID-19 since they can show the number of people who currently have the COVID-19 virus.
Serology tests are still being studied regarding their utility. They are currently most helpful in understanding how much a community may have been exposed to the disease. However, it is unknown if the presence of an antibody truly means someone is immune to COVID-19, and if so, for how long. Results of antibody tests should not change decisions on whether an individual should return to work, or if they should quarantine based on exposure to someone with the disease. Approximately 12 percent of Michigan’s tests overall have been serology tests; about 60 percent of those have been from the past nine days,” the statement read.

Meanwhile, Missouri also announced changes to its own reporting of COVID-19 cases.

“The Governor calls on us as public servants to get better every day,” said Dr. Randall Williams, director of Missouri’s DHSS. “As we continue to learn more about this virus and new tests emerge, we will continue providing better data with greater clarity and transparency to help Missourians make the best decisions for their health care possible.”

According to a statement released by Missouri’s DHSS, some key changes in the data will include:

• A change in the percent positivity rate. The percent positivity rate was previously calculated as the number of positive COVID-19 cases divided by the total number of tests completed. The new calculation is the number of positive cases divided by the number of people tested (not the number of tests done). This change will increase the positivity rate as reported through the dashboard, because each positive individual may have had multiple tests, which inflated the old denominator (the number of tests) but not the numerator (the number of confirmed COVID-19 cases). The previously reported rate cannot be compared to the current rate.

• The tests performed by day will include PCR tests only, which indicate only whether a person has an active COVID-19 case.
Numbers will have decreased from previous days’ reporting because DHSS is no longer including people who received only serology tests.

• Given the marked increase in serology testing, DHSS is now reporting separate information on serology, which is collected through a blood test to determine if a person has previously been infected and has formed antibodies against the virus.

2:37 a.m.: Wuhan lab director calls virus leak claims ‘pure fabrication’

Claims that the global coronavirus pandemic originated at the Wuhan Institute of Virology in the central Chinese city are a “pure fabrication,” the institute’s director said.

Wang Yanyi said the institute had no prior knowledge of the virus: “nor had we ever encountered, researched or kept the virus. In fact, like everyone else, we didn’t even know the virus existed. How could it have leaked from our lab when we never had it?”

Wang continued: “Many people might misunderstand that since our institute reported the RaTG-13’s genome similarity to SARS-CoV-2, we must have the RaTG-13 virus in our lab. In fact, that’s not the case. When we were sequencing the genes of this bat virus sample, we got the genome sequence of the RaTG-13 but we didn’t isolate nor obtain the live virus of RaTG-13. Thus, there is no possibility of us leaking RaTG-13.”

8:58 p.m.: Minnesota’s governor allows places of worship to open

Minnesota Gov. Tim Walz announced that starting Wednesday, he will allow places of worship to reopen at 25% capacity if they adhere to social distancing and other public health guidelines. Walz also said that COVID-19 cases are still climbing and may not peak until summer.

The state’s health commissioner said there was an increase of 847 positive coronavirus cases Friday — the highest daily total so far. There have been 19,845 positive cases in the state thus far.

ABC News’ Adam Kelsey and Christine Theodorou contributed to this report.

Copyright © 2020, ABC Audio. All rights reserved.
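Missouri’s positivity-rate change described above is simple arithmetic. A minimal sketch, using hypothetical counts (not Missouri’s actual figures), shows why dividing by people tested rather than tests performed yields a higher, non-comparable rate:

```python
def percent_positivity(positive_cases: int, denominator: int) -> float:
    """Percent positivity: positive cases divided by a chosen denominator."""
    return 100 * positive_cases / denominator

# Hypothetical counts for illustration only.
positive_cases = 1_000
total_tests = 25_000    # old denominator: every test, including repeat tests
people_tested = 20_000  # new denominator: each person counted once

old_rate = percent_positivity(positive_cases, total_tests)
new_rate = percent_positivity(positive_cases, people_tested)

# Repeat tests inflate the old denominator but not the numerator,
# so the new per-person rate comes out higher.
print(f"old: {old_rate:.1f}%  new: {new_rate:.1f}%")  # old: 4.0%  new: 5.0%
```

Because the two calculations use different denominators, a dashboard switching between them shows a jump in the rate even when the underlying case counts are unchanged, which is why DHSS warned the figures cannot be compared.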
Tim Warner/Getty Images

(MEMPHIS, Tenn.) — The Memphis Grizzlies have acquired Justin Holiday from the Chicago Bulls, the team announced late Thursday.

In exchange for the 29-year-old guard, Memphis sent guard MarShon Brooks and guard/forward Wayne Selden Jr. to Chicago. The Grizzlies also gave up two future second-round draft picks to the Bulls.

This season with Chicago, Holiday has started in every game in which he’s appeared, averaging 11.6 points, 4.4 rebounds and 2.2 assists.

January 4, 2019 / Sports News – National: Grizzlies acquire Justin Holiday from Bulls in trade. Written by Beau Lund.

Copyright © 2019, ABC Radio. All rights reserved.
Gluten- and wheat-free specialist Dr Schär UK has teamed up with Health Village, a new concept store from Lloyds Pharmacy. Its range of DS-gluten free and TRUfree products will be available in the new store, offering more convenience for consumers buying gluten- and wheat-free foods.

It will enable coeliacs, who get their staple food products on prescription, to purchase extra items no longer available on the NHS following the Primary Care Trust (PCT) budget cuts, such as sweet biscuits and crackers, said the firm. Health Village opened last week in London and is the first pharmacy in the UK to offer a wide range of products from DS-gluten free and TRUfree.

“Recent budget cuts from various PCTs have meant that a number of gluten- and wheat-free products are no longer available on prescription,” explained Emma Herring, retail brand manager for Dr. Schär UK. “Plus, with the number of units allowed on prescription also being restricted, we’ve definitely seen an increase in retail sales of these more ‘luxury’ items, which we expect to continue over the next few years.”

Products available include Dr. Schär UK’s TRUfree Rich Tea Biscuits, Digestives, Custard Creams, Bourbons, Chocolate Nobbles, Chocolate Fingers, High Fibre Crackers, Herb and Onion Crackers and Pretzels, and DS-gluten free Brown Multigrain Loaf, White Sliced Loaf, White Ciabatta Rolls, Brown Ciabatta Rolls, Rice Cakes, Breadsticks and Crispbread.

The manufacturer has also provided the pharmacy staff with a specialist training programme to ensure they have extensive knowledge of the products they will be selling. Dr Schär’s other gluten- and wheat-free food brand, Glutafin, is already available in Lloyds Pharmacy.
When walking into a venue and getting settled in for a show, or relaxing with friends during a set break discussing the first set, the music playing overhead can largely go unnoticed, while at other times it keeps the party vibe alive. Phish is a band whose walk-in music typically features some notable artists.

Whether you recognize those artists or not, you often want to go home and discover their music. Thanks to Phish’s own Julia Mordaunt, who creates the artistic designs and layouts for the band’s various releases, that task is now much easier: she has created a Spotify playlist of the band’s Walk-In and Set Break music from 2009–2017.

Artists such as Dr. Dog, The Meters, Professor Longhair, Broken Bells, The National, Cymande, Thelonious Monk, Real Estate, David Bowie, Kamasi Washington, Jimmy Smith, Al Green, and many more make up the rather enjoyable playlist. Take a listen below; all you have to do is press play.
On Monday, The Kinks’ frontman, Ray Davies, confirmed that the English rock band will be getting back together after more than 20 years. The group, who first came to fame with 1964’s “You Really Got Me,” broke up in 1996, primarily due to commercial failure and increasing tension among the band’s members.

In a recent interview with Channel 4 News, Ray Davies announced that the group would be getting back together to record an album, and that his brother Dave Davies and drummer Mick Avory are on board. As he explained, “We’ve been talking about it because I’ve got all these songs that I wrote, then the band — not broke up, we parted company — and I think it’s kind of an appropriate time to do it.” Addressing concerns about the tension between Dave and Mick, Ray added, “The trouble is, the two remaining members, my brother Dave and Mick, never got along very well. But I’ve made that work in the studio and it’s fired me up to make them play harder, and with fire.”

This announcement comes ahead of Friday’s release of Ray Davies’ latest album, Our Country: Americana Act II, which marks the second collaboration Davies has released with the Jayhawks.

[H/T Billboard]
Houghton Library Manuscript Cataloger Michael Austin (left) holds the Academy Award presented to Johnny Green, Class of 1928, for his original composition The Merry Wives of Windsor Overture, a subject in MGM’s Concert Hall series. Austin recently completed a major project to catalog Houghton’s Johnny Green Collection, which consists of thousands of manuscript scores, printed scores with handwritten notes, and correspondence. For more information about the collection, visit the Houghton Library Blog.
It has taken time — some say far too long — but medicine stands on the brink of an AI revolution. In a recent article in the New England Journal of Medicine, Isaac Kohane, head of Harvard Medical School’s Department of Biomedical Informatics, and his co-authors say that AI will indeed make it possible to bring all medical knowledge to bear in service of any case. Properly designed AI also has the potential to make our health care system more efficient and less expensive, ease the paperwork burden that has more and more doctors considering new careers, fill the gaping holes in access to quality care in the world’s poorest places, and, among many other things, serve as an unblinking watchdog on the lookout for the medical errors that kill an estimated 200,000 people and cost $1.9 billion annually.

“I’m convinced that the implementation of AI in medicine will be one of the things that change the way care is delivered going forward,” said David Bates, chief of internal medicine at Harvard-affiliated Brigham and Women’s Hospital, professor of medicine at Harvard Medical School and of health policy and management at the Harvard T.H. Chan School of Public Health. “It’s clear that clinicians don’t make as good decisions as they could.
If they had support to make better decisions, they could do a better job.”

Years after AI permeated other aspects of society, powering everything from creepily sticky online ads to financial trading systems to kids’ social media apps to our increasingly autonomous cars, the proliferation of studies showing the technology’s algorithms matching the skill of human doctors at a number of tasks signals its imminent arrival.

“I think it’s an unstoppable train in a specific area of medicine — showing true expert-level performance — and that’s in image recognition,” said Kohane, who is also the Marion V. Nelson Professor of Biomedical Informatics. “Once again medicine is slow to the mark. I’m no longer irritated but bemused that my kids, in their social sphere, are using more advanced AI than I use in my practice.”

But even those who see AI’s potential value recognize its potential risks. Poorly designed systems can misdiagnose. Software trained on data sets that reflect cultural biases will incorporate those blind spots. AI designed to both heal and make a buck might increase — rather than cut — costs, and programs that learn as they go can produce a raft of unintended consequences once they start interacting with unpredictable humans.

“I think the potential of AI and the challenges of AI are equally big,” said Ashish Jha, former director of the Harvard Global Health Institute and now dean of Brown University’s School of Public Health. “There are some very large problems in health care and medicine, both in the U.S. and globally, where AI can be extremely helpful. But the costs of doing it wrong are every bit as important as its potential benefits.
… The question is: Will we be better off?”

Many believe we will, but caution that implementation has to be done thoughtfully, with recognition of not just AI’s strengths but also its weaknesses, and taking advantage of a range of viewpoints brought by experts in fields outside of medicine and computer science, including ethics and philosophy, sociology, psychology, behavioral economics, and, one day, those trained in the budding field of machine behavior, which seeks to understand the complex and evolving interaction of humans and machines that learn as they go.

“The challenge with machine behavior is that you’re not deploying an algorithm in a vacuum. You’re deploying it into an environment where people will respond to it, will adapt to it. If I design a scoring system to rank hospitals, hospitals will change,” said David Parkes, George F. Colony Professor of Computer Science, co-director of the Harvard Data Science Initiative, and one of the co-authors of a recent article in the journal Nature calling for the establishment of machine behavior as a new field. “Just as it would be challenging to understand how a new employee will do in a new work environment, it’s challenging to understand how machines will do in any kind of environment, because people will adapt to them, will change their behavior.”

Machine learning on the doorstep

Though excitement has been building about the latest wave of AI, the technology has been in medicine in some form for decades, Parkes said. As early as the 1970s, “expert systems” were developed that encoded knowledge in a variety of fields in order to make recommendations on appropriate actions in particular circumstances. Among them was Mycin, developed by Stanford University researchers to help doctors better diagnose and treat bacterial infections.
Though Mycin was as good as human experts at this narrow chore, rule-based systems proved brittle, hard to maintain, and too costly, Parkes said.

The excitement over AI these days isn’t because the concept is new. It’s owing to rapid progress in a branch called machine learning, which takes advantage of recent advances in computer processing power and in big data that have made compiling and handling massive data sets routine. Machine learning algorithms — sets of instructions for how a program operates — have become sophisticated enough that they can learn as they go, improving performance without human intervention.

“The superpower of these AI systems is that they can look at all of these large amounts of data and hopefully surface the right information or the right predictions at the right time,” said Finale Doshi-Velez, John L. Loeb Associate Professor of Engineering and Applied Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). “Clinicians regularly miss various bits of information that might be relevant in the patient’s history. So that’s an example of a relatively low-hanging fruit that could potentially be very useful.”

Before being used, however, the algorithm has to be trained using a known data set. In medical imaging, a field where experts say AI holds the most promise soonest, the process begins with a review of thousands of images — of potential lung cancer, for example — that have been viewed and coded by experts. Using that feedback, the algorithm analyzes an image, checks the answer, and moves on, developing its own expertise.

In recent years, increasing numbers of studies show machine-learning algorithms equal and, in some cases, surpass human experts in performance. In 2016, for example, researchers at Beth Israel Deaconess Medical Center reported that an AI-powered diagnostic program correctly identified cancer in pathology slides 92 percent of the time, just shy of trained pathologists’ 96 percent.
Combining the two methods led to 99.5 percent accuracy.

More recently, in December 2018, researchers at Massachusetts General Hospital (MGH) and Harvard’s SEAS reported a system that was as accurate as trained radiologists at diagnosing intracranial hemorrhages, which lead to strokes. And in May 2019, researchers at Google and several academic medical centers reported an AI designed to detect lung cancer that was 94 percent accurate, beating six radiologists and recording both fewer false positives and fewer false negatives.

One recent area where AI’s promise has remained largely unrealized is the global response to COVID-19, according to Kohane and Bates. Bates, who delivered a talk in August at the Riyadh Global Digital Health Summit titled “Use of AI in Weathering the COVID Storm,” said that though there were successes, much of the response has relied on traditional epidemiological and medical tools.

One striking exception, he said, was the early detection of unusual pneumonia cases around a market in Wuhan, China, in late December by an AI system developed by Canada-based BlueDot. The detection, which would turn out to be SARS-CoV-2, came more than a week before the World Health Organization issued a public notice of the new virus.

“We did some things with artificial intelligence in this pandemic, but there is much more that we could do,” Bates told the online audience. In comments in July at the online conference FutureMed, Kohane was more succinct: “It was a very, very unimpressive performance. … We in health care were shooting for the moon, but we actually had not gotten out of our own backyard.”

The two agree that the biggest impediment to greater use of AI in formulating the COVID response has been a lack of reliable, real-time data. Data collection and sharing have been slowed by older infrastructure — some U.S.
reports are still faxed to public health centers, Bates said — by lags in data collection, and by privacy concerns that short-circuit data sharing.

“COVID has shown us that we have a data-access problem at the national and international level that prevents us from addressing burning problems in national health emergencies,” Kohane said.

A key success, Kohane said, may yet turn out to be the use of machine learning in vaccine development. We won’t likely know for some months which candidates proved most successful, but Kohane pointed out that the technology was used to screen large databases and select which viral proteins offered the greatest chance of success if blocked by a vaccine.

“It will play a much more important role going forward,” Bates said, expressing confidence that the current hurdles would be overcome. “It will be a key enabler of better management in the next pandemic.”

Corporations agree about that future promise and in recent years have been scrambling to join in. In February 2019, IBM Watson Health began a 10-year, $50 million partnership with Brigham and Women’s Hospital and Vanderbilt University Medical Center whose aim is to use AI on electronic health records and claims data to improve patient safety, precision medicine, and health equity. And in March 2019, Amazon awarded a $2 million AI research grant to Beth Israel in an effort to improve hospital efficiency, including patient care and clinical workflows.

A force multiplier?

A properly developed and deployed AI, experts say, will be akin to the cavalry riding in to help beleaguered physicians struggling with unrelenting workloads, high administrative burdens, and a tsunami of new clinical data. Robert Truog, head of the HMS Center for Bioethics, the Frances Glessner Lee Professor of Legal Medicine, and a pediatric anesthesiologist at Boston Children’s Hospital, said the defining characteristic of his last decade in practice has been a rapid increase in information.
While more data about patients and their conditions might be viewed as a good thing, it’s only good if it can be usefully managed.

“Over the last 10 years of my career, the volume of data has absolutely gone exponential,” Truog said. “I would have one image on a patient per day: their morning X-ray. Now, if you get an MRI, it generates literally hundreds of images, using different kinds of filters, different techniques, all of which convey slightly different variations of information. It’s just impossible to even look at all of the images.

“Psychologists say that humans can handle four independent variables and when we get to five, we’re lost,” he said. “So AI is coming at the perfect time. It has the potential to rescue us from data overload.”

Given the technology’s facility with medical imaging analysis, Truog, Kohane, and others say AI’s most immediate impact will be in radiology and pathology, fields where those skills are paramount. And, though some see a future with fewer radiologists and pathologists, others disagree. The best way to think about the technology’s future in medicine, they say, is not as a replacement for physicians, but rather as a force multiplier and a technological backstop that not only eases the burden on personnel at all levels, but makes them better.

“You’re not expecting this AI doctor that’s going to cure all ills, but rather AI that provides support so better decisions can be made,” Doshi-Velez said. “Health is a very holistic space, and I don’t see AIs being anywhere near able to manage a patient’s health. It’s too complicated. There are too many factors, and there are too many factors that aren’t really recorded.”

In a September 2019 issue of the Annals of Surgery, Ozanan Meireles, director of MGH’s Surgical Artificial Intelligence and Innovation Laboratory, and general surgery resident Daniel Hashimoto offered a view of what such a backstop might look like.
They described a system that they’re training to assist surgeons during stomach surgery by having it view thousands of videos of the procedure. Their goal is to produce a system that one day could virtually peer over a surgeon’s shoulder and offer advice in real time.

At the Harvard Chan School, meanwhile, a group of faculty members, including James Robins, Miguel Hernan, Sonia Hernandez-Diaz, and Andrew Beam, are harnessing machine learning to identify new interventions that can improve health outcomes. Their work, in the field of “causal inference,” seeks to identify the different sources of the statistical associations that are routinely found in the observational studies common in public health. Those studies are good at identifying factors that are linked to each other but less able to identify cause and effect. Hernandez-Diaz, a professor of epidemiology and co-director of the Chan School’s pharmacoepidemiology program, said causal inference can help interpret associations and recommend interventions.

For example, elevated enzyme levels in the blood can predict a heart attack, but lowering them will neither prevent nor treat the attack. A better understanding of causal relationships — and devising algorithms to sift through reams of data to find them — will let researchers obtain valid evidence that could lead to new treatments for a host of conditions.

“We will make mistakes, but the momentum won’t go back the other way,” Hernandez-Diaz said of AI’s increasing presence in medicine. “We will learn from them.”

Finding new interventions is one thing; designing them so health professionals can use them is another. Doshi-Velez’s work centers on “interpretable AI” and optimizing how doctors and patients can put it to work to improve health. AI’s strong suit is what Doshi-Velez describes as “large, shallow data,” while doctors’ expertise is the deep sense they may have of the actual patient.
Together, the two make a potentially powerful combination, but one whose promise will go unrealized if the physician ignores AI’s input because it is rendered in a hard-to-use or unintelligible form.

“I’m very excited about this team aspect and really thinking about the things that AI and machine-learning tools can provide an ultimate decision-maker — we’ve focused on doctors so far, but it could also be the patient — to empower them to make better decisions,” Doshi-Velez said.

While many point to AI’s potential to make the health care system work better, some say its potential to fill gaps in medical resources is also considerable. In regions far from major urban medical centers, local physicians could get assistance diagnosing and treating unfamiliar conditions from an AI-driven consultant that lets them offer patients a specialist’s insight as they decide whether a particular procedure — or additional expertise — is needed.

Outside the developed world, that capability has the potential to be transformative, according to Jha. AI-powered applications could vastly improve care in places where doctors are absent and informal medical systems have risen to fill the need. Recent studies in India and China serve as powerful examples. In India’s Bihar state, for example, 86 percent of cases resulted in unneeded or harmful medicine being prescribed; even in urban Delhi, the figure was 54 percent.

“If you are sick, is it better to go to the doctor or not? In 2019, in large parts of the world, it’s a wash. It’s unclear. And that is scary,” Jha said. “So it’s a low bar. People ask, ‘Will AI be helpful?’ I say we’d really have to screw up AI for it not to be helpful. Net-net, the opportunity for improvement over the status quo is massive.”

A double-edged sword?

Though the promise is great, the road ahead isn’t necessarily smooth.
Even AI’s most ardent supporters acknowledge that the likely bumps and potholes, both seen and unseen, should be taken seriously.

One challenge is ensuring that high-quality data is used to train AI; if the data is biased or otherwise flawed, that will be reflected in the performance. A second challenge is ensuring that the prejudices rife in society aren’t built into the algorithms by programmers unaware of the biases they may unconsciously hold.

That potential was a central point in a 2016 Wisconsin legal case, in which an AI-driven risk-assessment system for criminal recidivism was used in sentencing a man to six years in prison. The judge remarked that the “risk-assessment tools that have been utilized suggest that you’re extremely high risk to reoffend.” The defendant challenged the sentence, arguing that the AI’s proprietary software — which he couldn’t examine — may have violated his right to be sentenced based on accurate information. The sentence was upheld by the state supreme court, but that case, and the spread of similar systems to assess pretrial risk, has generated national debate over the potential for injustice in our increasing reliance on systems that have power over freedom or, in the health care arena, life and death, and that may be unfairly tilted or outright wrong.

“We have to recognize that getting diversity in the training of these algorithms is going to be incredibly important; otherwise we will be, in some sense, pouring concrete over whatever current distortions exist in practice, such as those due to socioeconomic status, ethnicity, and so on,” Kohane said.

Also highlighted by the case is the “black box” problem.
Since the algorithms are designed to learn and improve their performance over time, sometimes even their designers can’t be sure how they arrive at a recommendation or diagnosis, a feature that leaves some uncomfortable.

“If you start applying it, and it’s wrong, and we have no ability to see that it’s wrong and to fix it, you can cause more harm than good,” Jha said. “The more confident we get in technology, the more important it is to understand when humans can override these things. I think the Boeing 737 Max is a classic example. The system said the plane is going up, and the pilots saw it was going down but couldn’t override it.”

Jha said a similar scenario could play out in the developing world should, for example, a community health worker see something that makes him or her disagree with a recommendation made by a big-name company’s AI-driven app. In such a situation, being able to understand how the app’s decision was made, and how to override it, is essential.

“If you see a frontline community health worker in India disagree with a tool developed by a big company in Silicon Valley, Silicon Valley is going to win,” Jha said. “And that’s potentially a dangerous thing.”

Researchers at SEAS and MGH’s Radiology Laboratory of Medical Imaging and Computation are at work on both problems. The AI-based diagnostic system to detect intracranial hemorrhages unveiled in December 2019 was designed to be trained on hundreds, rather than thousands, of CT scans.
The more manageable number makes it easier to ensure the data is of high quality, according to Hyunkwang Lee, a SEAS doctoral student who worked on the project with colleagues including Sehyo Yune, a former postdoctoral research fellow at MGH Radiology and co-first author of a paper on the work, and Synho Do, senior author, HMS assistant professor of radiology, and director of the lab.

“We ensured the data set is of high quality, enabling the AI system to achieve a performance similar to that of radiologists,” Lee said.

Second, Lee and colleagues figured out a way to provide a window into an AI’s decision-making, cracking open the black box. The system was designed to show a set of reference images most similar to the CT scan it analyzed, allowing a human doctor to review and check the reasoning.

Jonathan Zittrain, Harvard’s George Bemis Professor of Law and director of the Berkman Klein Center for Internet and Society, said that, done wrong, AI in health care could be analogous to the cancer-causing asbestos used for decades in buildings across the U.S., with widespread harmful effects not immediately apparent. Zittrain pointed out that image-analysis software, while potentially useful in medicine, is also easily fooled. By changing a few pixels of an image of a cat — still clearly a cat to human eyes — MIT students prompted Google image software to identify it, with 100 percent certainty, as guacamole. Further, a well-known study by researchers at MIT and Stanford showed that three commercial facial-recognition programs had both gender and skin-type biases.

Ezekiel Emanuel, a professor of medical ethics and health policy at the University of Pennsylvania’s Perelman School of Medicine and author of a recent Viewpoint article in the Journal of the American Medical Association, argued that those anticipating an AI-driven health care transformation are likely to be disappointed.
Though he acknowledged that AI will likely be a useful tool, he said it won’t address the biggest problem: human behavior. Though they know better, people fail to exercise and eat right, and continue to smoke and drink too much. Behavior issues also apply to those working within the health care system, where mistakes are routine.

“We need fundamental behavior change on the part of these people. That’s why everyone is frustrated: Behavior change is hard,” Emanuel said.

Susan Murphy, professor of statistics and of computer science, agrees and is trying to do something about it. She’s focusing her efforts on AI-driven mobile apps that aim to reinforce healthy behaviors for people who are recovering from addiction or dealing with weight issues, diabetes, smoking, or high blood pressure — conditions for which the personal challenge persists day by day, hour by hour.

The sensors in ordinary smartphones, augmented by data from personal fitness devices such as the ubiquitous Fitbit, have the potential to give a well-designed algorithm ample information to take on the role of a health care angel on your shoulder.

The tricky part, Murphy said, is to truly personalize the reminders. A big part of that is understanding how and when to nudge — not during a meeting, for example, or when you’re driving a car, or even when you’re already exercising — so as to best support adopting healthy behaviors.

“How can we provide support for you in a way that doesn’t bother you so much that you’re not open to help in the future?” Murphy said. “What our algorithms do is they watch how responsive you are to a suggestion. If there’s a reduction in responsivity, they back off and come back later.”

The apps can use a smartphone’s sensors to figure out what’s going on around you. An app may know you’re in a meeting from your calendar, or that you’re talking more informally from the ambient noise its microphone detects.
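The backoff behavior Murphy describes — watch how responsive the user is, and ease off when responsiveness drops — can be sketched in a few lines of code. The sketch below is purely illustrative: the class name, window size, and probability floor are invented for this example and are not taken from Murphy’s actual algorithms.

```python
import random


class NudgePolicy:
    """Toy nudge policy: back off when the user stops responding.

    Tracks the last `window` nudges and sends the next nudge with a
    probability proportional to recent responsiveness, never dropping
    below `floor` so the app can eventually "come back later."
    """

    def __init__(self, window=5, floor=0.1):
        self.window = window    # how many recent nudges to consider
        self.floor = floor      # minimum probability of nudging again
        self.history = []       # 1 = user responded, 0 = nudge ignored

    def record(self, responded):
        """Log whether the most recent nudge got a response."""
        self.history.append(1 if responded else 0)
        self.history = self.history[-self.window:]

    def send_probability(self):
        """Probability of sending the next nudge."""
        if not self.history:
            return 1.0  # no evidence yet, so nudge freely
        responsiveness = sum(self.history) / len(self.history)
        return max(self.floor, responsiveness)

    def should_nudge(self, rng=random.random):
        """Randomized decision: nudge now, or back off?"""
        return rng() < self.send_probability()


# If the user responded to 2 of the last 5 nudges, the policy
# sends the next one with probability 0.4.
policy = NudgePolicy()
for responded in [True, True, False, False, False]:
    policy.record(responded)
print(policy.send_probability())  # 0.4
```

A real system would condition on context as well (the calendar, microphone, and GPS signals described above), but the core feedback loop — responsiveness in, nudge probability out — is the same shape.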
The app can also tell from the phone’s GPS how far you are from a gym or an AA meeting, or whether you are driving and so should be left alone.

Trickier still, Murphy said, is how to handle moments when the AI knows more about you than you do. Heart-rate sensors and a phone’s microphone might tell an AI that you’re stressed out when your goal is to live more calmly. You, however, are focused on an argument you’re having, not on its physiological effects or your long-term goals. Does the app send a nudge, given that it’s equally possible you would take a calming breath or angrily toss your phone across the room?

Working out such details is difficult but key, Murphy said, to designing algorithms that are truly helpful, that know you well, but that are only as intrusive as is welcome, and that, in the end, help you achieve your goals.

Enlisting allies

For AI to achieve its promise in health care, algorithms and their designers have to understand the potential pitfalls. To avoid them, Kohane said, it’s critical that AIs are tested under real-world circumstances before wide release.

Similarly, Jha said it’s important that such systems aren’t just released and forgotten. They should be reevaluated periodically to ensure they’re functioning as expected, allowing faulty AIs to be fixed or halted altogether.

Several experts said that drawing from other disciplines — in particular ethics and philosophy — may also help.
Third in a series that taps the expertise of the Harvard community to examine the promise and potential pitfalls of the coming age of artificial intelligence and machine learning.

The news is bad: “I’m sorry, but you have cancer.” Those unwelcome words sink in for a few minutes, and then your doctor begins describing recent advances in artificial intelligence, advances that let her compare your case to those of every other patient who’s ever had the same kind of cancer. She says she’s found the most effective treatment, one best suited for the specific genetic subtype of the disease in someone with your genetic background — truly personalized medicine. And the prognosis is good.

Programs like Embedded EthiCS at SEAS and the Harvard Philosophy Department, which provides ethics training to the University’s computer science students, seek to give those who will write tomorrow’s algorithms an ethical and philosophical foundation that will help them recognize bias — in society and in themselves — and teach them how to avoid it in their work.

Disciplines dealing with human behavior — sociology, psychology, behavioral economics — not to mention experts on policy, government regulation, and computer security, may also offer important insights.

“The place we’re likely to fall down is the way in which recommendations are delivered,” Bates said. “If they’re not delivered in a robust way, providers will ignore them. It’s very important to work with human-factors specialists and systems engineers about the way that suggestions are made to patients.”

Bringing these fields together to better understand how AIs work once they’re “in the wild” is the mission of what Parkes sees as a new discipline of machine behavior.
Computer scientists and health care experts should seek lessons from sociologists, psychologists, and cognitive behaviorists in answering questions about whether an AI-driven system is working as planned, he said.

“How useful was it that the AI system proposed that this medical expert should talk to this other medical expert?” Parkes said. “Was that intervention followed? Was it a productive conversation? Would they have talked anyway? Is there any way to tell?”
Indian novelist Amitav Ghosh will deliver the 23rd annual Hesburgh Lecture in Ethics and Public Policy, the University announced in a press release Monday.

The Hesburgh Lecture, which the Kroc Institute for International Peace Studies established in honor of University President Emeritus Fr. Theodore Hesburgh, is devoted to examining “an issue related to ethics and public policy in the context of peace and justice,” according to the press release.

Ghosh — who has received the Arthur C. Clarke Award, the Crossword Book Prize and a Man Booker Prize shortlisting — will explore the topic of climate change and address the current discussion of the topic, which “has skewed the discourse in certain directions with predominantly economic characterizations of problems and technological solutions,” the press release said.

“The Kroc Institute is delighted to partner with the Department of English and the Liu Institute in welcoming Amitav Ghosh to deliver this important annual lecture,” Ruth Abbey, interim director of the Kroc Institute, said in the release.

Ghosh will deliver his lecture at 4 p.m. Tuesday in the Jordan Auditorium of the Mendoza College of Business.
JAMESTOWN – A local health care provider has received a $1 million federal grant.

Congressman Tom Reed says The Chautauqua Center will receive the funding from the Capital Assistance for Disaster Response and Recovery Efforts program, which helps health centers impacted by natural disasters and crises respond to and recover from emergencies.

Reed says the package also increases centers’ capacity and capability to respond to such situations in the future by supporting access to high-quality primary care services for underserved and vulnerable populations.

“We care about making sure our regional health centers have fair access to the funds and resources they need to respond quickly to emergencies now and in the future,” Reed said in a statement. “We were proud to support this funding and will continue the fight to bring these important resources to our district.”

Officials with The Chautauqua Center say the funding will be used to help build a new facility in Dunkirk. The group hopes to have the new location open and serving the community by July of next year.
Arán takes charge of Board of Bar Examiners

Cuban-born Fernando Arán of Coral Gables, the first Hispanic to chair the Florida Board of Bar Examiners, pledges to help increase minority access to the legal profession. C. Jeffrey McInnis of Ft. Walton Beach was recently elected vice chair of the board and will become chair in October 2001. The Florida Supreme Court also has appointed Dr. Larry C. Carey of Tampa; J. Bert Grandoff of Tampa; Gloretta Hankins Hall of Palm City; and Paul J. Schwiep of Miami to the board.

Having served as president of both the Cuban American Bar Association and the Hispanic National Bar Association, Arán plans to visit organized bar associations and talk about the importance of increasing opportunities for minorities. His ideas include offering stipends for bar review courses and scholarships so minorities don’t have to work while studying for the bar exam.

“I want to focus on the fact that if they really want to increase minority participation in the profession that they, as voluntary bars, have a role to play to encourage minorities to apply, beyond scholarships and mentoring,” Arán said.

In addition, he said he will expand the Board of Bar Examiners’ outreach to all law school students and personally speak to student minority bar associations with hints toward increasing their chances of passing the bar. He wants to make sure law students are taking the courses that will help them pass the bar exam, and to give them tips on going through the application and background investigation process.

“Nowadays, students who may want to go into entertainment law show up in the bar review course and do not realize that family law is one of the subjects that will be on the exam,” he said.

Last year’s move to raise the bar exam passage rate should not be at odds with his top goal of bringing more minorities into the legal profession, he said. “I supported raising the passage rate,” Arán said.
“I went through the process to evaluate whether we should raise the pass-fail line. I participated in the exam evaluation and grading conference we had in Orlando. I actually sat down with five other members of our profession, including professors and judges and lay folks. I realized that a lot of exam questions I thought I failed or barely passed had a score that would have passed.”

In addition, he said, “Our expert, the person we hired to do the study, was emphatic in demonstrating by his data, based on other jurisdictions, that raising the exam pass-fail line by five, six or seven points would not have disparate impact on minorities. Yes, less will be passing, but statistically as many non-Hispanic whites will be failing.”

To make sure that raising the pass-fail line does not work against minorities, Arán said, he also voted in favor of collecting demographic information so Florida will be able to accurately track whether there is any impact on minorities taking the bar exam. “In an effort to make sure raising the pass-fail line doesn’t work against minorities is why I voted to keep the statistics on minorities. Now we can evaluate what impact we are having,” Arán said.

When the next bar exam is offered in February, racial information will be gathered as part of the application for the first time, he said. “Prior to this, we had to improvise and had to send out questionnaires to obtain the information or obtain it from fingerprint cards.”

Vowing to pick up where former Chair Randy Hanna of Tallahassee left off, Arán further vows to demystify the Bar admissions process. He describes the role of the Board of Bar Examiners as the bridge between law school students and the practicing members of The Florida Bar, with a duty to protect the public. For the future, Arán pledges: “We must educate those affected by the Bar admission process to earn their trust and to instill confidence.”

Arán said he is also sensitive to providing timely access to the legal profession.
Due to improved technology and the efforts of the Board of Bar Examiners, the length of an average background investigation has been reduced to four and a half months, a decrease of six weeks since 1995. He said he hopes to continue that downward trend, completing more bar applications in less time.

Describing himself as a “mentor to today’s youth and tomorrow’s future,” Arán is an Eagle Scout, a troop leader and vice president for operations for the South Florida Council of Boy Scouts, as well as a father who makes sure he divides his time fairly and camps out with his Indian Princess daughter.

Born in Havana, he immigrated to this country in 1962, when he was four years old, with his parents, who were both lawyers. “They did not revalidate their bar exam here, though,” he said with a chuckle. Arán received his undergraduate degree from the University of Miami in 1978 and his Juris Doctor from Georgetown University Law Center in 1981. He practices mainly in the fields of admiralty and maritime law and in construction and commercial litigation. He serves as a member of the Bar’s Standing Committee on Pro Bono Legal Services and was chair of the Florida Bar Grievance Committee “J” Division for Dade County.

C. Jeffrey McInnis

C. Jeffrey McInnis, a shareholder in the firm of Anchors, Foster, McInnis & Keefe of Ft. Walton Beach, was elected vice chair by his fellow board members. He will become chair after October 31, 2001, and serve through October 31, 2002. McInnis attended Okaloosa-Walton Junior College, where he received his Associate of Arts degree; Florida State University, where he received his undergraduate degree; and Stetson University College of Law, where he received his Juris Doctor.
Admitted to The Florida Bar in 1985, he served as a member of the board of directors of the Okaloosa-Walton Bar Association, as president of the Florida School Board Attorneys Association and as a member and chair of the District Board of Trustees for Okaloosa-Walton Community College. He is a former board member and past president of the Niceville-Valparaiso-Bay Area Chamber of Commerce, and a member of the Florida Municipal Attorneys Association and the board of trustees of the Fort Walton Beach Medical Center. He is a lifetime member of the Sigma Chi fraternity. He lives in Ft. Walton Beach with his wife, Katherine, and their three children.

Larry C. Carey

Larry C. Carey, a professor of surgery at the University of South Florida College of Medicine, has been appointed to the Florida Board of Bar Examiners by the Supreme Court to succeed retiring public member I. Martin Ford of Orlando. His term runs through October 31, 2003. Dr. Carey was born in Coal Grove, Ohio, and attended Ohio State University, where he received both his undergraduate degree and his doctorate of medicine. He holds active medical licenses in Florida, Ohio and Pennsylvania, and he is still in active practice. He was chair of the Department of Surgery at the University of South Florida College of Medicine from 1990 to 1999. He is a member of numerous professional organizations, including the American Board of Surgery, the American College of Surgeons, the American Surgical Association, the Southern Surgical Association, the American Association for the Surgery of Trauma, the American Gastroenterological Association and the Gamma and National Chapters of Alpha Omega Alpha. He lives in Tampa with his wife, Christina, and their daughter, Elizabeth.

J. Bert Grandoff

J. Bert Grandoff, a member in the Tampa firm of Allen, Dell, Frank & Trinkle, has been appointed to the board to succeed retiring member Franklin Harrison of Panama City. His term will extend through October 31, 2005.
Born in Tampa, where he still lives, Grandoff attended the University of Florida, where he received his undergraduate degree, and Stetson University College of Law, where he received his Juris Doctor. Admitted to The Florida Bar in 1965, he is a founding fellow of the American College of Construction Lawyers, a past member of the Bar Board of Governors and a past chair of a Bar Grievance Committee. He is past chair of the Hillsborough County Aviation Authority, and he has served as Hillsborough county attorney.

Gloretta Hankins Hall

Gloretta Hankins Hall, a partner in the Stuart firm of Gary, Williams, Parenti, et al., has been appointed to the board to succeed retiring member Karen Coolman Amlong of Ft. Lauderdale. Her term of office will extend through October 31, 2005. Hall was born in Louisville, Georgia. She attended Florida Atlantic University, where she received her baccalaureate degree in nursing, and the University of Miami School of Law, where she received her Juris Doctor. Admitted to The Florida Bar in 1991, she is a member of the Palm Beach County Bar Association, the Martin County Bar Association and the Florida Academy of Trial Lawyers. She lives in Palm City.

Paul J. Schwiep

Paul J. Schwiep, a shareholder in the firm of Aragon, Burlington, Weil & Crockett of Miami, has been appointed to the board to succeed retiring member Irwin Block of Miami. His term of office will extend through October 31, 2005. Born in New York City, Schwiep attended Abilene Christian University, where he received his undergraduate degree, and the University of Oregon School of Law, where he received his Juris Doctor. Admitted to The Florida Bar in 1989, he is a member of the Federal Bar Association and the American Bar Association. Schwiep lives in Miami.

December 15, 2000, Jan Pudlow, Associate Editor