Banks and investment management groups are experimenting with quantum to reduce risk and gain accurate knowledge about portfolios faster than ever. Here’s how those initial projects and uses of this potentially game-changing technology are evolving.
If there’s an industry steeped in computations, it’s the financial services sector. Optimization problems, for which a whole chorus of variables must be fine-tuned and modulated, routinely plague financial firms, especially when it comes to highly engineered financial products such as those developed through quantitative analysis.
That need for complex mathematical modeling at scale makes the finance industry a perfect candidate for the promise of quantum computing, which makes (extremely) quick work of computations, including complex ones, delivering results in minutes or hours instead of weeks and months.
But beyond speed, quantum’s ability to deliver accurate knowledge in reasonable time frames is what makes it especially valuable, says Benno Broer, chief commercial officer at PASQAL, a full-stack quantum services company. “We want to know how to price something and do it more accurately and we want that answer in the next hour. With the classical computer, that would take two weeks and the trade [would be] gone,” he says.
Doing “computation better” is one of the reasons why a joint team of engineers and financial analysts at Ally Financial turned to quantum. Their focus was traditional Exchange-Traded Funds (ETFs), which comprise hundreds or thousands of equities that together deliver a certain return over a period of time. The premise is that even if a few individual components underperform, the rest buoy the net result, delivering a fairly predictable return over a fixed amount of time.
But managing and manipulating so many component parts of an ETF is painful: “There’s a lot of variability in terms of stock performance and there’s a lot of buying and selling and related transaction fees,” says Sathish Muthukrishnan, chief information, data, and digital officer at Ally.
Muthukrishnan and the Ally team explored whether they could deliver similar returns on an ETF that had fewer component parts. To do so, they explored the optimization problem of “cardinality constraints” and developed a hybrid quantum-classical approach to financial index tracking portfolios that maximizes returns and minimizes risk. Their work earned them a 2023 US CIO 100 Award for IT innovation and leadership.
The Ally team used a method called quantum annealing that helped them settle on a choice few equities. “You’re able to select a smaller number of stocks with predictive return and lower operational and transaction costs, which ultimately means that you can reduce the variability and more accurately predict returns,” Muthukrishnan says.
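Ally has not published the details of its model, but a cardinality-constrained index-tracking problem is commonly written as a QUBO (quadratic unconstrained binary optimization), the form a quantum annealer accepts. The sketch below, with invented data and an equal-weighted basket, shows the shape of such a formulation; the tiny instance is solved by brute force, whereas in practice the Q matrix would be handed to annealing hardware.

```python
# Illustrative sketch only -- the article does not publish Ally's actual model.
# Pick K of N stocks whose equal-weighted basket best tracks a synthetic index,
# expressed as a QUBO and solved here by brute force.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 8, 3, 250                                   # stocks, basket size, trading days
index_ret = rng.normal(0.0004, 0.01, T)               # synthetic index returns
stock_ret = index_ret[:, None] + rng.normal(0, 0.01, (T, N))  # stocks correlated with the index

# Objective: E(x) = ||(1/K) * R x - index||^2 + lam * (sum(x) - K)^2,  x in {0,1}^N
lam = 10.0
A = stock_ret / K                                     # equal weight 1/K on each chosen stock
Q = A.T @ A + lam * np.ones((N, N))                   # quadratic coefficients
linear = -2 * A.T @ index_ret - 2 * lam * K           # linear coefficients (constants dropped)
np.fill_diagonal(Q, np.diag(Q) + linear)              # QUBO folds linear terms onto the diagonal

def energy(x):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

best = min(itertools.product([0, 1], repeat=N), key=energy)
print("selected stocks:", [i for i, bit in enumerate(best) if bit])
```

On an annealer, the hardware searches for the low-energy bit string instead of enumerating all 2^N combinations, which is where the claimed speed advantage comes from as N grows.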
Because of quantum’s abilities, the Ally team could create 50 separate scenarios and back-test the models. Such rigor also highlights flaws in the models used for traditional computing and helps industries develop more robust foundations for data-related research, Muthukrishnan says, a happy byproduct.
Ally’s work with ETFs is just one example signaling what could be a sea change for the industry, thanks to quantum’s computing power.
Cirdan Group, a European banking group that offers investment solutions, is another financial organization putting quantum computing to work, teaming up with quantum-as-a-service company Terra Quantum on a computational challenge for its investment solutions. The specific focus for the partnership was exotic derivatives, which are uniquely challenging because they are represented by mathematical functions with no closed-form solutions.
“There’s no [simple] equation that says X plus Y equals the value of the derivative,” says Antonio de Negri, CEO at the Cirdan Group. As a result, “we have to calculate our derivatives by running thousands and thousands of Monte Carlo simulations,” de Negri says. For traditional high-performance computing (HPC), the process is backbreaking and time-consuming. But financial institutions like Cirdan have been slogging along, conducting these tedious calculations anyway because they help the firm understand asset risk and manage it more efficiently.
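The article does not describe Cirdan’s models, but the sketch below illustrates why such valuations are costly on classical hardware: a plain Monte Carlo pricer for a simple path-dependent (up-and-out barrier) option, with all parameters invented for the example. Real exotic books involve far more paths, instruments, and risk factors.

```python
# Generic illustration, not Cirdan's model: pricing a path-dependent payoff by
# Monte Carlo. With no closed-form price, many simulated paths must be generated
# and averaged, which is what makes these runs slow on classical hardware.
import numpy as np

rng = np.random.default_rng(42)
S0, K, B = 100.0, 100.0, 120.0        # spot, strike, knock-out barrier (assumed values)
r, sigma, T = 0.02, 0.25, 1.0         # risk-free rate, volatility, maturity in years
steps, paths = 252, 50_000            # daily steps, number of simulated paths

dt = T / steps
# Geometric Brownian motion paths under the risk-neutral measure
z = rng.standard_normal((paths, steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(log_paths)

# Up-and-out call: the payoff is void if the path ever touches the barrier
knocked_out = S.max(axis=1) >= B
payoff = np.where(knocked_out, 0.0, np.maximum(S[:, -1] - K, 0.0))
price = np.exp(-r * T) * payoff.mean()
print(f"Monte Carlo up-and-out call price: {price:.4f}")
```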
Cirdan had been throwing the best computing muscle at the problem, but calculations were time-consuming and expensive. Terra conducted the same optimization calculations with quantum and delivered a 75% reduction in computing time — from 10 minutes to 2 minutes — with the same accuracy. Future iterations are expected to deliver even greater economies, de Negri says.
Eight minutes might not seem like a lot, but it can translate to large gains. “Given the size of the portfolio or the size of the bets that [financial institutions] are making, if you can do a computation in a slightly better way, 0.1% better or faster than you could, that already means a significant upside,” says Broer, who remembers when he used to work with Excel-based models to evaluate portfolio risk.
“We started our models, pressed run, and had to wait until the morning and hope they hadn’t crashed overnight,” Broer says. It was Monday-morning quarterbacking at its best, because you could only tell after the fact whether past transactions had breached the risk limit.
“It’s quite easy to run into the limits of classical computing if you have abundant data, many different assets that you’re trading, or many different clients you’re providing a loan to,” Broer says.
Optimization problems like the ones Ally and Cirdan have tackled are well suited to quantum computing because classical computing lacks the capacity to deliver meaningful results in a reasonable time frame. But quantum also holds the promise of making machine learning more efficient, says Vishal Shete, managing director UK and head of commercialization at Terra Quantum AG.
Because qubits, the building blocks of quantum, “can learn with much less and noisier data, they’re very efficient at learning,” Shete says. This means quantum can take on machine learning challenges with fewer constraints than traditional HPC demands.
Nilesh Vaidya, executive vice president at research and consulting firm Capgemini, agrees about the value of quantum for machine learning. “Applying machine learning techniques using quantum computing capability prepares the models better and faster,” Vaidya says. “Today, it takes a while to create and deploy models and visualize the outcomes, but with quantum some parts of it can be greatly accelerated.”
Beyond technical feasibility, Shete advises enterprises to cherry-pick projects where even a small improvement can lead to good business value. Stakeholder interest is also key. “You could have all the other factors fitting in but if the business unit lead that you’re working with is either resistant or unwilling to change, that’s a stumbling block,” Shete says.
“If you understand the strengths and weaknesses of quantum, then in each field you can find a good niche where you can add large value,” Broer says. “But if you assume it can add value to everything then you’ll be very disappointed. It’s like a hammer looking for a nail, it’s going to be a lot of work to find that nail but once you have it, you can get started.”
How exactly to get started? While a few financial institutions are building their quantum teams from the ground up, many are choosing to partner with experts in the field.
Markus Pflitsch, Terra Quantum’s CEO and founder, argues that “it’s just not feasible for banks and other industries to build quantum capabilities in-house given the dearth of talent.” In addition to providing access to “best-in-breed” quantum hardware, firms such as Terra Quantum can run quantum software on in-house simulators based on classical HPC components, which is how Cirdan addressed its exotic derivative problem. When quantum computers move beyond the Noisy Intermediate Scale Quantum (NISQ) devices they occupy today, the Terra Quantum software can also translate to those platforms.
Shete points out that quantum specialists can also cross-pollinate solutions from different industries. For example, “the simulation work we’re doing in options pricing has got lots of similarities with work that can be done in molecular simulation in chemical companies,” he says. A quantum-only company might seed ideas borrowed from one sector across the board, Shete suggests.
One machine learning challenge Terra Quantum is currently working on involves understanding customers with time-series prediction models: “It’s about predicting customer behavior, really understanding how customers will react, what is the best grouping of different customers, what the correlations are and how they should be put together, and hence what are the best products customers should be nudged toward,” Shete says.
In markets, time-series predictions help understand how markets will behave and evaluate correlation between different types of assets. And in risk management, quantum can be deployed “for Monte Carlo simulations or understanding anti-money laundering or compliance issues that might be happening within your bank,” Shete says.
For its part, Ally expects to evaluate more quantum-related projects in the future, including credit loss modeling, where one can predict what percentage of loans granted to customers might end up as losses. The proof-of-concept projects Ally has conducted so far are its trial run for when quantum is ready for prime time.
“It’s important for us to test the technology and be ready,” Muthukrishnan says. “It’s like constantly working out and doing your sprints so when the real race happens, you’re ready to go. You can’t sit around and wait for things to happen — it’s all about consistency, preparation, and then being able to rise to the occasion when the time is right.”
In Neal Stephenson’s 1995 science fiction novel, The Diamond Age, readers meet Nell, a young girl who comes into possession of a highly advanced book, The Young Lady’s Illustrated Primer. The book is not the usual static collection of texts and images but a deeply immersive tool that can converse with the reader, answer questions, and personalize its content, all in service of educating and motivating a young girl to be a strong, independent individual.
Such a device, even after the introduction of the Internet and tablet computers, has remained in the realm of science fiction—until now. Artificial intelligence, or AI, took a giant leap forward with the introduction in November 2022 of ChatGPT, an AI technology capable of producing remarkably creative responses and sophisticated analysis through human-like dialogue. It has triggered a wave of innovation, some of which suggests we might be on the brink of an era of interactive, super-intelligent tools not unlike the book Stephenson dreamed up for Nell.
Sundar Pichai, Google’s CEO, calls artificial intelligence “more profound than fire or electricity or anything we have done in the past.” Reid Hoffman, the founder of LinkedIn and current partner at Greylock Partners, says, “The power to make positive change in the world is about to get the biggest boost it’s ever had.” And Bill Gates has said that “this new wave of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.”
Over the last year, developers have released a dizzying array of AI tools that can generate text, images, music, and video with no need for complicated coding but simply in response to instructions given in natural language. These technologies are rapidly improving, and developers are introducing capabilities that would have been considered science fiction just a few years ago. AI is also raising pressing ethical questions around bias, appropriate use, and plagiarism.
In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect “probably the biggest positive transformation that education has ever seen.” But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.
What Is Generative AI?
Artificial intelligence is a branch of computer science that focuses on creating software capable of mimicking behaviors and processes we would consider “intelligent” if exhibited by humans, including reasoning, learning, problem-solving, and exercising creativity. AI systems can be applied to an extensive range of tasks, including language translation, image recognition, navigating autonomous vehicles, detecting and treating cancer, and, in the case of generative AI, producing content and knowledge rather than simply searching for and retrieving it.
“Foundation models” in generative AI are systems trained on a large dataset to learn a broad base of knowledge that can then be adapted to a range of different, more specific purposes. This learning method is self-supervised, meaning the model learns by finding patterns and relationships in the data it is trained on.
Large Language Models (LLMs) are foundation models that have been trained on a vast amount of text data. For example, the training data for OpenAI’s GPT model consisted of web content, books, Wikipedia articles, news articles, social media posts, code snippets, and more. OpenAI’s GPT-3 models underwent training on a staggering 300 billion “tokens” or word pieces, using more than 175 billion parameters to shape the model’s behavior—nearly 100 times more data than the company’s GPT-2 model had.
By doing this analysis across billions of sentences, LLMs develop a statistical understanding of language: how words and phrases are usually combined, what topics are typically discussed together, and what tone or style is appropriate in different contexts. That allows them to generate human-like text and perform a wide range of tasks, such as writing articles, answering questions, or analyzing unstructured data.
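As a rough illustration of that statistical idea, the toy bigram model below counts which word follows which in a tiny corpus and samples text from those counts. It is vastly simpler than a real LLM, which predicts sub-word tokens using billions of learned parameters, but the underlying notion of "predict the next piece of text from observed patterns" is the same.

```python
# Toy illustration of the statistical idea, vastly simplified compared with an
# LLM: count which word follows which in a tiny corpus, then sample the next
# word in proportion to those counts.
from collections import Counter, defaultdict
import random

corpus = (
    "students learn from examples . teachers give examples . "
    "students ask questions . teachers answer questions ."
).split()

follows = defaultdict(Counter)                 # word -> counts of the words that follow it
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = follows[word]
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("students"))                    # e.g. "students ask questions . teachers give examples ."
```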
LLMs include OpenAI’s GPT-4, Google’s PaLM, and Meta’s LLaMA. These LLMs serve as “foundations” for AI applications. ChatGPT is built on GPT-3.5 and GPT-4, while Bard uses Google’s Pathways Language Model 2 (PaLM 2) as its foundation.
Some of the best-known applications are:
ChatGPT 3.5. The free version of ChatGPT released by OpenAI in November 2022. It was trained on data only up to 2021, and while it is very fast, it is prone to inaccuracies.
ChatGPT 4.0. The current version of ChatGPT, which is more powerful and accurate than ChatGPT 3.5 but also slower, and it requires a paid account. It also has extended capabilities through plug-ins that give it the ability to interact with content from websites, perform more sophisticated mathematical functions, and access other services. A new Code Interpreter feature gives ChatGPT the ability to analyze data, create charts, solve math problems, edit files, and even develop hypotheses to explain data trends.
Microsoft Bing Chat. An iteration of Microsoft’s Bing search engine that is enhanced with OpenAI’s ChatGPT technology. It can browse websites and offers source citations with its results.
Google Bard. Google’s AI generates text, translates languages, writes different kinds of creative content, and writes and debugs code in more than 20 different programming languages. The tone and style of Bard’s replies can be fine-tuned to be simple, long, short, professional, or casual. Bard also leverages Google Lens to analyze images uploaded with prompts.
Anthropic Claude 2. A chatbot that can generate text, summarize content, and perform other tasks, Claude 2 can analyze texts of roughly 75,000 words—about the length of The Great Gatsby—and generate responses of more than 3,000 words. The model was built using a set of principles that serve as a sort of “constitution” for AI systems, with the aim of making them more helpful, honest, and harmless.
These AI systems have been improving at a remarkable pace, including in how well they perform on assessments of human knowledge. OpenAI’s GPT-3.5, which was released in March 2022, only managed to score in the 10th percentile on the bar exam, but GPT-4.0, introduced a year later, made a significant leap, scoring in the 90th percentile. What makes these feats especially impressive is that OpenAI did not specifically train the system to take these exams; the AI was able to come up with the correct answers on its own. Similarly, Google’s medical AI model substantially improved its performance on a U.S. Medical Licensing Examination practice test, with its accuracy rate jumping to 85 percent in March 2021 from 33 percent in December 2020.
These two examples prompt one to ask: if AI continues to improve so rapidly, what will these systems be able to achieve in the next few years? What’s more, new studies challenge the assumption that AI-generated responses are stale or sterile. In the case of Google’s AI model, physicians preferred the AI’s long-form answers to those written by their fellow doctors, and nonmedical study participants rated the AI answers as more helpful. Another study found that participants preferred a medical chatbot’s responses over those of a physician and rated them significantly higher, not just for quality but also for empathy. What will happen when “empathetic” AI is used in education?
Other studies have looked at the reasoning capabilities of these models. Microsoft researchers suggest that newer systems “exhibit more general intelligence than previous AI models” and are coming “strikingly close to human-level performance.” While some observers question those conclusions, the AI systems display an increasing ability to generate coherent and contextually appropriate responses, make connections between different pieces of information, and engage in reasoning processes such as inference, deduction, and analogy.
Despite their prodigious capabilities, these systems are not without flaws. At times, they churn out information that might sound convincing but is irrelevant, illogical, or entirely false—a phenomenon known as “hallucination.” The execution of certain mathematical operations presents another area of difficulty for AI. And while these systems can generate well-crafted and realistic text, understanding why the model made specific decisions or predictions can be challenging.
The Importance of Well-Designed Prompts
Using generative AI systems such as ChatGPT, Bard, and Claude 2 is relatively simple. One has only to type in a request or a task (called a prompt), and the AI generates a response. Properly constructed prompts are essential for getting useful results from generative AI tools. You can ask generative AI to analyze text, find patterns in data, compare opposing arguments, and summarize an article in different ways (see sidebar for examples of AI prompts).
One challenge is that, after using search engines for years, people have been preconditioned to phrase questions in a certain way. A search engine is something like a helpful librarian who takes a specific question and points you to the most relevant sources for possible answers. The search engine (or librarian) doesn’t create anything new but efficiently retrieves what’s already there.
Generative AI is more akin to a competent intern. You give a generative AI tool instructions through prompts, as you would to an intern, asking it to complete a task and produce a product. The AI interprets your instructions, thinks about the best way to carry them out, and produces something original or performs a task to fulfill your directive. The results aren’t pre-made or stored somewhere—they’re produced on the fly, based on the information the intern (generative AI) has been trained on. The output often depends on the precision and clarity of the instructions (prompts) you provide. A vague or poorly defined prompt might lead the AI to produce less relevant results. The more context and direction you give it, the better the result will be. What’s more, the capabilities of these AI systems are being enhanced through the introduction of versatile plug-ins that equip them to browse websites, analyze data files, or access other services. Think of this as giving your intern access to a group of experts to help accomplish your tasks.
One strategy in using a generative AI tool is first to tell it what kind of expert or persona you want it to “be.” Ask it to be an expert management consultant, a skilled teacher, a writing tutor, or a copy editor, and then give it a task.
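A minimal sketch of that persona-first pattern is below, assuming the OpenAI Python client (v1.x) with an API key already set in the environment; the model name and the example prompt are placeholders, and any chat-style AI service works the same way.

```python
# Persona-first prompting sketch. Assumes the OpenAI Python client (v1.x) and an
# API key in the OPENAI_API_KEY environment variable; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        # First, the persona: tell the model what kind of expert to "be" ...
        {"role": "system", "content": "You are a skilled writing tutor for ninth graders."},
        # ... then give it the task.
        {"role": "user", "content": "Give three concrete suggestions to improve this thesis "
                                    "statement: 'Pollution is bad for many reasons.'"},
    ],
)
print(response.choices[0].message.content)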
Prompts can also be constructed to get these AI systems to perform complex and multi-step operations. For example, let’s say a teacher wants to create an adaptive tutoring program—for any subject, any grade, in any language—that customizes the examples for students based on their interests. She wants each lesson to culminate in a short-response or multiple-choice quiz. If the student answers the questions correctly, the AI tutor should move on to the next lesson. If the student responds incorrectly, the AI should explain the concept again, but using simpler language.
Previously, designing this kind of interactive system would have required a relatively sophisticated and expensive software program. With ChatGPT, however, just giving those instructions in a prompt delivers a serviceable tutoring system. It isn’t perfect, but remember that it was built virtually for free, with just a few lines of English language as a command. And nothing in the education market today has the capability to generate almost limitless examples to connect the lesson concept to students’ interests.
Chained prompts can also help focus AI systems. For example, an educator can prompt a generative AI system first to read a practice guide from the What Works Clearinghouse and summarize its recommendations. Then, in a follow-up prompt, the teacher can ask the AI to develop a set of classroom activities based on what it just read. By curating the source material and using the right prompts, the educator can anchor the generated responses in evidence and high-quality research.
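Here is one sketch of such a chain, again assuming the OpenAI Python client (v1.x); the file name, model name, and prompts are placeholders, and the practice guide would need to be saved locally as plain text first.

```python
# Chained-prompting sketch: the output of the first call becomes context for the
# second. Assumes the OpenAI Python client (v1.x); file and model names are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # placeholder

guide_text = open("wwc_practice_guide.txt").read()  # hypothetical local copy of the guide

summarize = {"role": "user",
             "content": "Summarize the recommendations in this practice guide:\n\n" + guide_text}

# Step 1: summarize the curated source material
summary = client.chat.completions.create(
    model=MODEL, messages=[summarize]
).choices[0].message.content

# Step 2: ask for classroom activities grounded in that summary
activities = client.chat.completions.create(
    model=MODEL,
    messages=[
        summarize,
        {"role": "assistant", "content": summary},
        {"role": "user", "content": "Based on those recommendations, design three classroom "
                                    "activities for a sixth-grade math class."},
    ],
).choices[0].message.content
print(activities)
```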
However, much like fledgling interns learning the ropes in a new environment, AI does commit occasional errors. Such fallibility, while inevitable, underlines the critical importance of maintaining rigorous oversight of AI’s output. Monitoring not only acts as a crucial checkpoint for accuracy but also becomes a vital source of real-time feedback for the system. It’s through this iterative refinement process that an AI system, over time, can significantly minimize its error rate and increase its efficacy.
Uses of AI in Education
In May 2023, the U.S. Department of Education released a report titled Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. The department had conducted listening sessions in 2022 with more than 700 people, including educators and parents, to gauge their views on AI. The report noted that “constituents believe that action is required now in order to get ahead of the expected increase of AI in education technology—and they want to roll up their sleeves and start working together.” People expressed anxiety about “future potential risks” with AI but also felt that “AI may enable achieving educational priorities in better ways, at scale, and with lower costs.”
AI could serve—or is already serving—in several teaching-and-learning roles:
Instructional assistants. AI’s ability to conduct human-like conversations opens up possibilities for adaptive tutoring or instructional assistants that can help explain difficult concepts to students. AI-based feedback systems can offer constructive critiques on student writing, which can help students fine-tune their writing skills. Some research also suggests certain kinds of prompts can help children generate more fruitful questions about learning. AI models might also support customized learning for students with disabilities and provide translation for English language learners.
Teaching assistants. AI might tackle some of the administrative tasks that keep teachers from investing more time with their peers or students. Early uses include automating routine tasks such as drafting lesson plans, creating differentiated materials, designing worksheets, developing quizzes, and exploring ways of explaining complicated academic materials. AI can also provide educators with recommendations to meet student needs and help teachers reflect, plan, and improve their practice.
Parent assistants. Parents can use AI to generate letters requesting individualized education plan (IEP) services or to ask that a child be evaluated for gifted and talented programs. For parents choosing a school for their child, AI could serve as an administrative assistant, mapping out school options within driving distance of home, generating application timelines, compiling contact information, and the like. Generative AI can even create bedtime stories with evolving plots tailored to a child’s interests.
Administrator assistants. Using generative AI, school administrators can draft various communications, including materials for parents, newsletters, and other community-engagement documents. AI systems can also help with the difficult tasks of organizing class or bus schedules, and they can analyze complex data to identify patterns or needs. ChatGPT can perform sophisticated sentiment analysis that could be useful for measuring school-climate and other survey data.
Though the potential is great, most teachers have yet to use these tools. A Morning Consult and EdChoice poll found that while 60 percent say they’ve heard about ChatGPT, only 14 percent have used it in their free time, and just 13 percent have used it at school. It’s likely that most teachers and students will engage with generative AI not through the platforms themselves but rather through AI capabilities embedded in software. Instructional providers such as Khan Academy, Varsity Tutors, and DuoLingo are experimenting with GPT-4-powered tutors that are trained on datasets specific to these organizations to provide individualized learning support that has additional guardrails to help protect students and enhance the experience for teachers.
Google’s Project Tailwind is experimenting with an AI notebook that can analyze student notes and then develop study questions or provide tutoring support through a chat interface. These features could soon be available on Google Classroom, potentially reaching over half of all U.S. classrooms. Brisk Teaching is one of the first companies to build a portfolio of AI services designed specifically for teachers—differentiating content, drafting lesson plans, providing student feedback, and serving as an AI assistant to streamline workflow among different apps and tools.
Providers of curriculum and instruction materials might also include AI assistants for instant help and tutoring tailored to the companies’ products. One example is the edX Xpert, a ChatGPT-based learning assistant on the edX platform. It offers immediate, customized academic and customer support for online learners worldwide.
Regardless of the ways AI is used in classrooms, the fundamental task of policymakers and education leaders is to ensure that the technology is serving sound instructional practice. As Vicki Phillips, CEO of the National Center on Education and the Economy, wrote, “We should not only think about how technology can assist teachers and learners in improving what they’re doing now, but what it means for ensuring that new ways of teaching and learning flourish alongside the applications of AI.”
Challenges and Risks
Along with these potential benefits come some difficult challenges and risks the education community must navigate:
Student cheating. Students might use AI to solve homework problems or take quizzes. AI-generated essays threaten to undermine learning as well as the college-entrance process. Aside from the ethical issues involved in such cheating, students who use AI to do their work for them may not be learning the content and skills they need.
Bias in AI algorithms. AI systems learn from the data they are trained on. If this data contains biases, those biases can be learned and perpetuated by the AI system. For example, if the data include student-performance information that’s biased toward one ethnicity, gender, or socioeconomic segment, the AI system could learn to favor students from that group. Less cited but still important are potential biases around political ideology and possibly even pedagogical philosophy that may generate responses not aligned to a community’s values.
Privacy concerns. When students or educators interact with generative-AI tools, their conversations and personal information might be stored and analyzed, posing a risk to their privacy. With public AI systems, educators should refrain from inputting or exposing sensitive details about themselves, their colleagues, or their students, including but not limited to private communications, personally identifiable information, health records, academic performance, emotional well-being, and financial information.
Decreased social connection. There is a risk that more time spent using AI systems will come at the cost of less student interaction with both educators and classmates. Children may also begin turning to these conversational AI systems in place of their friends. As a result, AI could intensify and worsen the public health crisis of loneliness, isolation, and lack of connection identified by the U.S. Surgeon General.
Overreliance on technology. Both teachers and students face the risk of becoming overly reliant on AI-driven technology. For students, this could stifle learning, especially the development of critical thinking. This challenge extends to educators as well. While AI can expedite lesson-plan generation, speed does not equate to quality. Teachers may be tempted to accept the initial AI-generated content rather than devote time to reviewing and refining it for optimal educational value.
Equity issues. Not all students have equal access to computer devices and the Internet. That imbalance could accelerate a widening of the achievement gap between students from different socioeconomic backgrounds.
Many of these risks are not new or unique to AI. Schools banned calculators and cellphones when these devices were first introduced, largely over concerns related to cheating. Privacy concerns around educational technology have led lawmakers to introduce hundreds of bills in state legislatures, and there are growing tensions between new technologies and existing federal privacy laws. The concerns over bias are understandable, but similar scrutiny is also warranted for existing content and materials that rarely, if ever, undergo review for racial or political bias.
In light of these challenges, the Department of Education has stressed the importance of keeping “humans in the loop” when using AI, particularly when the output might be used to inform a decision. As the department encouraged in its 2023 report, teachers, learners, and others need to retain their agency. AI cannot “replace a teacher, a guardian, or an education leader as the custodian of their students’ learning,” the report stressed.
Policy Challenges with AI
Policymakers are grappling with several questions related to AI as they seek to strike a balance between supporting innovation and protecting the public interest (see sidebar). The speed of innovation in AI is outpacing many policymakers’ understanding, let alone their ability to develop a consensus on the best ways to minimize the potential harms from AI while maximizing the benefits. The Department of Education’s 2023 report describes the risks and opportunities posed by AI, but its recommendations amount to guidance at best. The White House released a Blueprint for an AI Bill of Rights, but it, too, is more an aspirational statement than a governing document. Congress is drafting legislation related to AI, which will help generate needed debate, but the path to the president’s desk for signature is murky at best.
It is up to policymakers to establish clearer rules of the road and create a framework that provides consumer protections, builds public trust in AI systems, and establishes the regulatory certainty companies need for their product road maps. Considering the potential for AI to affect our economy, national security, and broader society, there is no time to waste.
Why AI Is Different
It is wise to be skeptical of new technologies that claim to revolutionize learning. In the past, prognosticators have promised that television, the computer, and the Internet, in turn, would transform education. Unfortunately, the heralded revolutions fell short of expectations.
There are some early signs, though, that this technological wave might be different in the benefits it brings to students, teachers, and parents. Previous technologies democratized access to content and resources, but AI is democratizing a kind of machine intelligence that can be used to perform a myriad of tasks. Moreover, these capabilities are open and affordable—nearly anyone with an Internet connection and a phone now has access to an intelligent assistant.
Generative AI models keep getting more powerful and are improving rapidly. The capabilities of these systems months or years from now will far exceed their current capacity. Their capabilities are also expanding through integration with other expert systems. Take math, for example. GPT-3.5 had some difficulties with certain basic mathematical concepts, but GPT-4 made significant improvement. Now, the incorporation of the Wolfram plug-in has nearly erased the remaining limitations.
It’s reasonable to anticipate that these systems will become more potent, more accessible, and more affordable in the years ahead. The question, then, is how to use these emerging capabilities responsibly to improve teaching and learning.
The paradox of AI may lie in its potential to enhance the human, interpersonal element in education. Aaron Levie, CEO of Box, a cloud-based content-management company, believes that AI will ultimately help us attend more quickly to those important tasks “that only a human can do.” Frederick Hess, director of education policy studies at the American Enterprise Institute, similarly asserts that “successful schools are inevitably the product of the relationships between adults and students. When technology ignores that, it’s bound to disappoint. But when it’s designed to offer more coaching, free up time for meaningful teacher-student interaction, or offer students more personalized feedback, technology can make a significant, positive difference.”
Technology does not revolutionize education; humans do. It is humans who create the systems and institutions that educate children, and it is the leaders of those systems who decide which tools to use and how to use them. Until those institutions modernize to accommodate the new possibilities of these technologies, we should expect incremental improvements at best. As Joel Rose, CEO of New Classrooms Innovation Partners, noted, “The most urgent need is for new and existing organizations to redesign the student experience in ways that take full advantage of AI’s capabilities.”
While past technologies have not lived up to hyped expectations, AI is not merely a continuation of the past; it is a leap into a new era of machine intelligence that we are only beginning to grasp. While the immediate implementation of these systems is imperfect, the swift pace of improvement holds promising prospects. The responsibility rests with human intervention—with educators, policymakers, and parents to incorporate this technology thoughtfully in a manner that optimally benefits teachers and learners. Our collective ambition should not focus solely or primarily on averting potential risks but rather on articulating a vision of the role AI should play in teaching and learning—a game plan that leverages the best of these technologies while preserving the best of human relationships.
Policy Matters
Officials and lawmakers must grapple with several questions related to AI to protect students and consumers and establish the rules of the road for companies. Key issues include:
Risk management framework: What is the optimal framework for assessing and managing AI risks? What specific requirements should be instituted for higher-risk applications? In education, for example, there is a difference between an AI system that generates a lesson sample and an AI system grading a test that will determine a student’s admission to a school or program. There is growing support for using the AI Risk Management Framework from the U.S. Commerce Department’s National Institute of Standards and Technology as a starting point for building trustworthiness into the design, development, use, and evaluation of AI products, services, and systems.
Licensing and certification: Should the United States require licensing and certification for AI models, systems, and applications? If so, what role could third-party audits and certifications play in assessing the safety and reliability of different AI systems? Schools and companies need to begin thinking about responsible AI practices to prepare for potential certification systems in the future.
Centralized vs. decentralized AI governance: Is it more effective to establish a central AI authority or agency, or would it be preferable to allow individual sectors to manage their own AI-related issues? For example, regulating AI in autonomous vehicles is different from regulating AI in drug discovery or intelligent tutoring systems. Overly broad, one-size-fits-all frameworks and mandates may not work and could slow innovation in these sectors. In addition, it is not clear that many agencies have the authority or expertise to regulate AI systems in diverse sectors.
Privacy and content moderation: Many of the new AI systems pose significant new privacy questions and challenges. How should existing privacy and content-moderation frameworks, such as the Family Educational Rights and Privacy Act (FERPA), be adapted for AI, and which new policies or frameworks might be necessary to address unique challenges posed by AI?
Transparency and disclosure: What degree of transparency and disclosure should be required for AI models, particularly regarding the data they have been trained on? How can we develop comprehensive disclosure policies to ensure that users are aware when they are interacting with an AI service?
How do I get it to work? Generative AI Example Prompts
Unlike traditional search engines, which use keyword indexing to retrieve existing information from a vast collection of websites, generative AI synthesizes the same information to create content based on prompts that are inputted by human users. Because generative AI is a new technology for the public, writing effective prompts for tools like ChatGPT may require trial and error. Here are some ideas for writing prompts for a variety of scenarios using generative AI tools:
You are the StudyBuddy, an adaptive tutor. Your task is to provide a lesson on the basics of a subject followed by a quiz that is either multiple choice or a short answer. After I respond to the quiz, please grade my answer. Explain the correct answer. If I get it right, move on to the next lesson. If I get it wrong, explain the concept again using simpler language. To personalize the learning experience for me, please ask what my interests are. Use that information to make relevant examples throughout.
Mr. Ranedeer: Your Personalized AI Tutor
Coding and prompt engineering. Can be configured for depth (Elementary – Postdoc), Learning Styles (Visual, Verbal, Active, Intuitive, Reflective, Global), Tone Styles (Encouraging, Neutral, Informative, Friendly, Humorous), and Reasoning Frameworks (Deductive, Inductive, Abductive, Analogous, Causal).
You are a tutor that always responds in the Socratic style. You *never* give the student the answer but always try to ask just the right question to help them learn to think for themselves. You should always tune your question to the interest and knowledge of the student, breaking down the problem into simpler parts until it’s at just the right level for them.
I want you to act as an AI writing tutor. I will provide you with a student who needs help improving their writing, and your task is to use artificial intelligence tools, such as natural language processing, to give the student feedback on how they can improve their composition. You should also use your rhetorical knowledge and experience about effective writing techniques in order to suggest ways that the student can better express their thoughts and ideas in written form.
You are a quiz creator of highly diagnostic quizzes. You will make good low-stakes tests and diagnostics. You will then ask me two questions. First, (1) What, specifically, should the quiz test? Second, (2) For which audience is the quiz? Once you have my answers, you will construct several multiple-choice questions to quiz the audience on that topic. The questions should be highly relevant and go beyond just facts. Multiple choice questions should include plausible, competitive alternate responses and should not include an “all of the above” option. At the end of the quiz, you will provide an answer key and explain the right answer.
I would like you to act as an example generator for students. When confronted with new and complex concepts, adding many and varied examples helps students better understand those concepts. I would like you to ask what concept I would like examples of and what level of students I am teaching. You will look up the concept and then provide me with four different and varied accurate examples of the concept in action.
You will write a Harvard Business School case on the Topic of Google managing AI, when subject to the Innovator’s Dilemma. Chain of thought: Step 1. Consider how these concepts relate to Google. Step 2: Write a case that revolves around a dilemma at Google about releasing a generative AI system that could compete with search.
What additional questions would a person seeking mastery of this Topic ask?
Read a WWC practice guide. Create a series of lessons over five days that are based on Recommendation 6. Create a 45-minute lesson plan for Day 4.
The following is a draft letter to parents from a superintendent. Step 1: Rewrite it to make it easier to understand and more persuasive about the value of assessments. Step 2. Translate it into Spanish.
Write me a letter requesting the school district provide a 1:1 classroom aid be added to my 13-year-old son’s IEP. Base it on Virginia special education law and the least restrictive environment for a child with diagnoses of a Traumatic Brain Injury, PTSD, ADHD, and significant intellectual delay.
LOS ANGELES, Aug. 15, 2023 /PRNewswire/ -- Inc. revealed today that Blueprint Prep, a premier education company specializing in lifelong professional advancement and test preparation, ranks No. 2105 on the 2023 Inc. 5000, its annual list of the fastest-growing private companies in America. The prestigious ranking provides a data-driven look at the most successful companies within the economy's most dynamic segment—its independent, entrepreneurial businesses. Facebook, Chobani, Under Armour, Microsoft, Patagonia, and many other household name brands gained their first national exposure as honorees on the Inc. 5000.
"We are thrilled to debut on the Inc. 5000 list this year," said Matt Riley, CEO and Co-Founder of Blueprint Prep. "The award is a recognition of the hard work and results our team has generated to accomplish our mission: to offer captivating, entertaining, personalized prep that prepares students and professionals in the fields of law, medicine and healthcare to face the high-stakes situations they'll experience, in turn empowering them to Boost the lives of others."
The Inc. 5000 class of 2023 represents companies that have driven rapid revenue growth while navigating inflationary pressure, the rising costs of capital, and seemingly intractable hiring challenges. Among this year's top 5000 companies, the median three-year revenue growth rate was an astonishing 219 percent. In all, this year's Inc. 5000 companies have added 1,186,006 jobs to the economy over the past three years.
"Running a business has only gotten harder since the end of the pandemic," says Inc. editor-in-chief Scott Omelianuk. "To make the Inc. 5000—with the fast growth that requires—is truly an accomplishment. Inc. is thrilled to honor the companies that are building our future."
Blueprint Prep is an innovative, award-winning educational platform for high-stakes test prep in the legal and medical fields. Established in 2005 by founder and CEO Matt Riley, Blueprint has produced unrivaled results, including industry-leading score increases for its pre-law and pre-med students taking the LSAT and MCAT. Blueprint offers a unique combination of engaging video lectures, unparalleled expertise in content creation, and the latest AI-driven adaptive learning technology. Its many accolades and awards include multiple EdTech Cool Tool Awards and the "Overall Career Prep Company of the Year" award in 2023 from EdTech Breakthrough Awards. Thanks to its commitment to innovation, Blueprint is the top choice for learners, whether for test prep or lifelong professional prep.
To learn more about Blueprint Prep's suite of professional prep solutions, visit https://blueprintprep.com/.
About Blueprint Prep
Founded in 2005, Blueprint Prep is the leading platform for high-stakes test prep in the U.S., offering live and self-paced online courses, private tutoring, self-study materials, and application consulting services for pre-law, pre-med, and medical school students, as well as Qbanks, tutoring, and live study groups for residents, practicing physicians, PAs and NPs via its acquisitions of Rosh Review and Sarah Michelle NP Reviews. Blueprint leverages a unique approach that combines engaging video lectures, unparalleled expertise in content creation, the latest adaptive learning technology, and personalized study planning tools. Blueprint has produced unrivaled results, including industry-leading score increases for its pre-law and pre-med students taking the LSAT and MCAT.
More about Inc. and the Inc. 5000
Methodology
Companies on the 2023 Inc. 5000 are ranked according to percentage revenue growth from 2019 to 2022. To qualify, companies must have been founded and generating revenue by March 31, 2019. They must be U.S.-based, privately held, for-profit, and independent—not subsidiaries or divisions of other companies—as of December 31, 2022. (Since then, some on the list may have gone public or been acquired.) The minimum revenue required for 2019 is $100,000; the minimum for 2022 is $2 million. For complete results of the Inc. 5000, go to www.inc.com/inc5000.
About Inc.
Inc. Business Media is the leading multimedia brand for entrepreneurs. Through its journalism, Inc. aims to inform, educate, and elevate the profile of our community: the risk-takers, the innovators, and the ultra-driven go-getters who are creating our future. Inc.'s award-winning work reaches more than 50 million people across a variety of channels. Its proprietary Inc. 5000 list, produced every year since 1982, analyzes company data to rank the fastest-growing privately held businesses in the United States. For more information, visit www.inc.com.
SOURCE Blueprint Test Preparation LLC
At Microsoft’s research labs around the world, computer scientists, programmers, engineers and other experts are trying to crack some of the computer industry’s toughest problems, from system design and security to quantum computing and data visualization.
A subset of those scientists, engineers and programmers have a different goal: They’re trying to use computer science to solve one of the most complex and deadly challenges humans face: Cancer.
And, for the most part, they are doing so with algorithms and computers instead of test tubes and beakers.
“We are trying to change the way research is done on a daily basis in biology,” said Jasmin Fisher, a biologist by training who works in the programming principles and tools group in Microsoft’s Cambridge, U.K., lab.
One team of researchers is using machine learning and natural language processing to help the world’s leading oncologists figure out the most effective, individualized cancer treatment for their patients, by providing an intuitive way to sort through all the research data available.
Another is pairing machine learning with computer vision to give radiologists a more detailed understanding of how their patients’ tumors are progressing.
Yet another group of researchers has created powerful algorithms that help scientists understand how cancers develop and what treatments will work best to fight them.
And another team is working on moonshot efforts that could one day allow scientists to program cells to fight diseases, including cancer.
Although the individual projects vary widely, Microsoft’s overarching philosophy toward solving cancer focuses on two basic approaches, said Jeannette M. Wing, Microsoft’s corporate vice president in charge of the company’s basic research labs.
One approach is rooted in the idea that cancer and other biological processes are information processing systems. Using that approach, the tools that are used to model and reason about computational processes – such as programming languages, compilers and model checkers – are used to model and reason about biological processes.
The other approach is more data-driven. It’s based on the idea that researchers can apply techniques such as machine learning to the plethora of biological data that has suddenly become available, and use those sophisticated analysis tools to better understand and treat cancer.
Both approaches share some common ground – including the core philosophy that success depends on both biologists and computer scientists bringing their expertise to the problem.
“The collaboration between biologists and computer scientists is actually key to making this work,” Wing said.
Wing said Microsoft has good reason to make broad, bold investments in using computer science to tackle cancer. For one, it’s in keeping with the company’s core mission.
“If you talk about empowering every person and organization to achieve more, this is step one in that journey,” she said.
Beyond that, she said, Microsoft’s extensive investment in cloud computing is a natural fit for a field that needs plenty of computing power to solve big problems.
Longer term, she said, it makes sense for Microsoft to invest in ways it can provide tools to customers no matter what computing platform they choose – even if, one day, that platform is a living cell.
“If the computers of the future are not going to be made just in silicon but might be made in living matter, it behooves us to make sure we understand what it means to program on those computers,” she said.
The research teams’ efforts also come amid major breakthroughs in understanding the role genetics plays in both getting and treating cancer. That, in turn, is spurring an even stronger focus on treating each cancer case in a personalized way, sometimes called precision medicine.
“We’re in a revolution with respect to cancer treatment,” said David Heckerman, a distinguished scientist and senior director of the genomics group at Microsoft. “Even 10 years ago people thought that you treat the tissue: You have brain cancer, you get brain cancer treatment. You have lung cancer, you get lung cancer treatment. Now, we know it’s just as, if not more, important to treat the genomics of the cancer, e.g. which genes have gone bad in the genome.”
That research has been helped along by recent advances in the ability to more easily and affordably map the human genome and other genetic material. That’s giving scientists a wealth of information for understanding cancer and developing more personalized and effective treatments – but the sheer amount of data also presents plenty of challenges.
“We’ve reached the point where we are drowning in information. We can measure so much, and because we can, we do,” Fisher said. “How do you take that information and turn that into knowledge? That’s a different story. There’s a huge leap here between information and data, and knowledge and understanding.”
Researchers say that’s an area where computer scientists can best help the biological sciences. Some of the most promising approaches involve using a branch of artificial intelligence called machine learning to automatically do the legwork that can make precision medicine unwieldy.
In a more basic scenario, a machine learning system can do things like identify a cat in a photo based on previous pictures of cats it has seen. In the field of cancer research, these techniques are being deployed to sort and organize millions of pieces of research and medical data.
“These are our fortes, artificial intelligence and machine learning,” said Hoifung Poon, a researcher in Microsoft’s Redmond, Washington, lab who is using a technique called machine reading to help oncologists find the latest information about effective cancer treatments for individual patients.
Another big advantage: cloud computing. Using tools like the Azure cloud computing platform, researchers are able to provide biologists with these kinds of approaches even if the medical experts don’t have their own powerful computers, by hosting the tools in the cloud for anyone to access over the internet.
Microsoft researchers say the company also is well-positioned to lead computing cancer efforts because of its long history as a software company providing a platform other people can build from and expand on.
“If you look at the combination of things that Microsoft does really well, then it makes perfect sense for Microsoft to be in this industry,” said Andrew Phillips, who heads the biological computation research group at Microsoft’s Cambridge, U.K., lab.
In his field specifically, Phillips said researchers benefit from Microsoft’s history as a software innovator.
“We can use methods that we’ve developed for programming computers to program biology, and then unlock even more applications and even better treatments,” he said.
Of course, none of these tools will help fight cancer and save lives unless they are accessible and understandable to biologists, oncologists and other cancer researchers.
Microsoft researchers say they have taken great pains to make their systems easy to use, even for people without any background – or particular interest – in technology and computer science. That includes everything from learning to speak the language of doctors and biologists to designing computer-based tools that mimic the systems people use in their labs.
“We are always talking about building tools that help the doctors,” said Aditya Nori, a senior researcher who specializes in programming languages and machine learning and is working on systems to assess tumor changes.
Jasmin Fisher doesn’t want to cure cancer. She wants to solve it — and she believes it’s possible in her lifetime.
“I’m not saying that cancer will cease to exist,” said Fisher, a senior researcher in the programming principles and tools group in Microsoft’s Cambridge, U.K., research lab and an associate professor in the biochemistry department at Cambridge University. “But once you manage it – once you know how to control it – it’s a solved problem.”
To do that, Fisher and her team believe you need to use technology to understand cancer – or, more specifically, the biological processes that cause a cell to turn cancerous. Then, once you understand where the problem occurred, you need to figure out how to fix it.
Fisher takes the computational approach to cancer research. She thinks of it like computer scientists think about computer programs. Her goal is to understand the program, or set of instructions, that causes a cell to execute its commands, or behave in a certain way. Once you can build a computer program that describes the healthy behavior of a cell, and compare it to that of a diseased cell, you can figure out a way that the unhealthy behavior can be fixed.
“If you can figure out how to build these programs, and then you can debug them, it’s a solved problem,” she said.
That sounds simple enough – but, of course, actually getting there is quite complicated.
One approach Fisher and her team are taking is called Bio Model Analyzer, or BMA for short. It’s a cloud-based tool that allows biologists to model how cells interact and communicate with each other, and the connections they make.
The system creates a computerized model that compares the biological processes of a healthy cell with the abnormal processes that occur when disease strikes. That, in turn, could allow scientists to see the interactions between the millions of genes and proteins in the human body that lead to cancer, and to quickly devise the best, least harmful way to provide personalized treatment for that patient.
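To make that idea concrete, here is a minimal sketch, in Python, of the kind of comparison such a model enables. It is not BMA itself, and the gene names and update rules are invented for illustration: a tiny network of interacting components is written as a “program,” and the stable state a healthy cell settles into is compared with the state reached when a mutation locks the growth signal permanently on.

```python
# Illustrative sketch only (not BMA): a toy Boolean network with hypothetical genes.
# We compare the fixed point a healthy cell reaches with the fixed point reached
# when a mutation keeps the growth signal stuck "on".

def step(state, mutated=False):
    """One synchronous update of the toy network."""
    growth_signal, tumor_suppressor, divide = state
    new_growth = True if mutated else growth_signal          # mutation: signal stuck on
    new_suppressor = not growth_signal                       # suppressor silenced by the signal
    new_divide = growth_signal and not tumor_suppressor      # divide only if unchecked
    return (new_growth, new_suppressor, new_divide)

def stable_state(state, mutated=False, max_steps=50):
    """Iterate updates until the network stops changing (a fixed point)."""
    for _ in range(max_steps):
        nxt = step(state, mutated)
        if nxt == state:
            return state
        state = nxt
    return state

healthy = stable_state((False, True, False))
cancerous = stable_state((False, True, False), mutated=True)
print("healthy (signal, suppressor, divide):", healthy)     # division stays off
print("mutated (signal, suppressor, divide):", cancerous)   # runaway division
```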
“I use BMA to understand cancers – understand the process of becoming cancers, understand the communications that are going on,” said Ben Hall, a Royal Society University Research Fellow in Cambridge, U.K., who works with Fisher on the project.
Hall said BMA has many uses, including figuring out how to detect cancer earlier and understanding how better to treat cancer by modeling which medicines will be most effective and at what point the cancer might become resistant to them.
Here’s one way BMA might work: Let’s say a patient has a rare and often fatal form of brain cancer. Using BMA, clinicians could enter all the biological information about that patient into the system. Then, they could use the system to run all sorts of experiments, comparing the cancer patient’s information with that of a healthy patient, for example, or simulating how the patient’s system might respond to various medications.
That kind of computation would be impossible for a person to do using pen and paper, or even a simpler computer program, because there are so many variables among the millions of molecules, proteins and genes at work in the human body. To create the kinds of solutions that Fisher envisions, you need powerful computational tools capable of building these immensely complex models and running through possible fixes for the abnormalities.
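Building on the same toy model, the sketch below illustrates the “run all sorts of experiments” loop described above – again purely hypothetical, with invented genes and drugs. Each simulated drug holds one node of the network off, and the run shows which interventions switch runaway division back off.

```python
# Illustrative in-silico screening sketch (hypothetical genes and drugs, not BMA).
# Each simulated drug blocks one node of the toy mutated network; we check whether
# the cell is still dividing after the network settles.

def simulate(blocked=None, steps=50):
    """Run the toy mutated network; `blocked` names a node a simulated drug holds off."""
    growth, suppressor, divide = True, False, True   # the mutated cell's runaway state
    for _ in range(steps):
        growth = False if blocked == "growth_signal" else True      # mutation keeps it on
        suppressor = False if blocked == "suppressor" else (not growth)
        divide = False if blocked == "division" else (growth and not suppressor)
    return divide

candidate_drugs = {None: "no treatment",
                   "growth_signal": "drug A (blocks the growth signal)",
                   "division": "drug B (blocks the division machinery)"}

for target, label in candidate_drugs.items():
    print(f"{label:40s} -> cell still dividing: {simulate(target)}")
```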
The ability to run these types of experiments “in silico” – or using computers – instead of with pen and paper or test tube and beaker also allows the researchers to quickly test many more possibilities. That, in turn, is giving them a better understanding about how cancers develop, evolve and interact with the rest of the body.
“I think it will accelerate research because we are able to test so many more possibilities than we possibly could in the laboratory,” said Jonathan Dry, a principal scientist at the pharmaceutical company AstraZeneca whose team collaborated with Fisher’s team.
In the past, Dry said, the sheer difficulty of testing any hypothesis meant that researchers had to focus on their favorite ones, making educated guesses as to what might be most promising. A system such as BMA allows them to try out all sorts of ideas, which makes it more likely they will hit on the correct ones – and less likely they’ll miss the dark horse candidates.
“If we had to go in and experimentally test each hypothesis, it would be nigh on impossible,” Dry said. “These models supply us a sense, really, of all the possibilities.”
Microsoft and AstraZeneca have been using BMA to better understand drug interactions and resistance in patients with a certain type of leukemia. With BMA, the two research teams were able to better understand why various patients responded differently to certain treatments.
Dry said BMA holds huge promise for more personalized approaches to cancer treatment, or precision medicine. The researchers are hoping that a system like BMA could eventually allow researchers and oncologists to look in detail at a person’s cancer case and also run tests that consider other factors that could impact treatment, such as whether the patient has another illness or is taking non-cancer medications that might interact with the cancer drugs.
“It really recognizes that every patient is an individual and there can be vast heterogeneity,” Dry said.
Fisher believes that systems such as BMA have the possibility to revolutionize how cancer is understood, but success is only possible if the biologists feel comfortable using them.
David Benque, a designer who has worked extensively on BMA, said the system was built to be as familiar and understandable to biologists as possible. Benque worked for years to create tools that visually mimic what scientists might use in a lab, using language biologists could understand.
Fisher said it’s imperative that systems like this be “biologist friendly.” Otherwise, she said, the breakthroughs needed to solve cancer just won’t happen.
“Everyone realizes that there is a need for computing in cancer research. It’s one thing to understand that, and it’s another thing to convince a clinician to actually use these tools,” she said.
If you’re a developer creating a new piece of software, chances are you’ll write your code in what computer scientists like to call a principled way: by using a programming language and other formal processes to create a system that follows computing rules.
Neil Dalchau wants to do the same thing for biology. He’s part of a team that is trying to do computing in cells instead of on silicon.
“If you can do computing with biological systems, then you can transfer what we’ve learned in traditional computing into medical or biotechnology applications,” said Dalchau, a scientist in the biological computation research group at Microsoft’s Cambridge, U.K., lab.
The ultimate goal of this computational approach: to program biology like we program computers. That’s a breakthrough that would open all sorts of possibilities for everything from treating diseases to feeding the world with more efficient crops.
“All aspects of our daily lives will be affected,” said Andrew Phillips, who heads the biological computation research group.
Phillips said one approach is to create a kind of molecular computer that you would put inside a cell to monitor for disease. If the sensor detected a disease, it would actuate a response to fight it.
That’s a stark improvement over many current cancer treatments, which end up destroying healthy cells in the process of fighting the cancerous ones.
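A deliberately simple sketch of that sense-and-respond logic follows – the marker names, thresholds and response are invented for the example, and a real molecular circuit would be built from chemistry, not Python. The point it illustrates is why such a sensor might use an “AND gate”: requiring two disease markers together keeps the response from firing in healthy cells.

```python
# Toy sketch of a sense-and-respond circuit: actuate a therapeutic response only
# when two (hypothetical) disease markers are both present above a threshold.

def molecular_and_gate(marker_a, marker_b, threshold=0.5):
    """Return True (release the payload) only if both markers exceed the threshold."""
    return marker_a > threshold and marker_b > threshold

cells = [
    {"name": "healthy cell",    "marker_a": 0.1, "marker_b": 0.2},
    {"name": "suspicious cell", "marker_a": 0.8, "marker_b": 0.3},  # one marker alone is not enough
    {"name": "diseased cell",   "marker_a": 0.9, "marker_b": 0.7},
]

for cell in cells:
    fire = molecular_and_gate(cell["marker_a"], cell["marker_b"])
    print(f'{cell["name"]:16s} -> actuate response: {fire}')
```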
Phillips cautions that computer scientists are still in the very early stages of this research and those kinds of long-term goals remain far off.
“It’s an ultimate application,” he said.
One big and obvious challenge is that biological systems, including our bodies, are much more mysterious than the computers we built to run software.
“We built the computer. We know how it works. We didn’t build the cell, and many of its complex internal workings remain a mystery to us. So we need to understand how the cell computes in order to program it,” Phillips said. “We need to develop the methods and software for analyzing and programming cells.”
Take cancer, for example. Sara-Jane Dunn, a scientist who also is working in the biological computation group, said you can think of cancer as a biological program gone wrong – a healthy cell that has a bug that caused it to glitch. And by the same token, she noted, you can think of the immune system as the machinery that has the ability to fix some, but not all, bugs.
Scientists have learned so much about what causes cancer and what activates the immune system, but Dunn said it’s still early days, and there is still much more work to be done. If her team gets to a point where they understand those systems as well as we understand how to make Microsoft Word run on a PC, they might be able to equip the immune system to mount a powerful response to cancer on its own.
“If we want to be able to program biology, then we actually need to be able to understand what it is biology computes in the first place,” she said. “That is where I think we can have some major impacts.”
Is the ability to program biology like we program computers a moonshot effort? Phillips believes it is an ambitious, long-term goal – but he sees a path to success.
“Like the moonshot, we know that this is technically possible,” he said. “Now it's a matter of making it a reality.”
Millions of people worldwide will be diagnosed with cancer this year. For a select few, experts from leading cancer institutions will gather at what are called molecular tumor boards to review that patient’s individual history and come up with the best, personalized treatment plan based on their cancer diagnosis and genetic makeup.
Hoifung Poon wants to democratize the molecular tumor board, and he’s working with a team of researchers on a tool to do it.
It’s called Project Hanover. It’s a data-driven approach that uses a branch of artificial intelligence called machine learning to automatically do the legwork that makes it so difficult for cancer experts to evaluate every case individually.
“We understand that cancer is often not caused by a single mutation. Instead, it stems from complex interactions of lots of different mutations, which means that you need to pretty much look at everything you know about the genome,” Poon said.
To do that can require sifting through millions of pieces of fragmented information to find all the common ground applicable to this one person and this one cancer case. For a busy oncologist managing many patients, that simply isn’t possible.
That’s why the Microsoft researchers are working on a system that could augment how doctors approach the task today. The system is designed to automatically sort through all that fragmented information to find the most relevant pieces of data – leaving tumor experts with more time to use their expertise to figure out the best treatment plan for patients.
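The sketch below gives a feel for that triage step. It is not Project Hanover’s pipeline, and the patient profile and literature snippets are invented examples: it simply ranks snippets by textual similarity to a patient’s mutation profile using off-the-shelf TF-IDF vectors, so the most relevant evidence surfaces first.

```python
# Minimal triage sketch (not Project Hanover): rank invented literature snippets by
# similarity to a patient's mutation profile so the most applicable evidence comes first.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

snippets = [
    "FLT3 internal tandem duplication predicts poor response to standard induction therapy.",
    "BRAF V600E melanoma patients respond to vemurafenib.",
    "Co-occurring FLT3 and NPM1 mutations alter prognosis in acute myeloid leukemia.",
    "EGFR exon 19 deletions sensitize lung tumors to gefitinib.",
]
patient_profile = "acute myeloid leukemia with FLT3 and NPM1 mutations"

# Vectorize snippets and profile with TF-IDF, then score each snippet against the profile.
vectorizer = TfidfVectorizer()
snippet_vectors = vectorizer.fit_transform(snippets)
profile_vector = vectorizer.transform([patient_profile])
scores = cosine_similarity(profile_vector, snippet_vectors).ravel()

# Print the snippets from most to least relevant for this (hypothetical) patient.
for score, text in sorted(zip(scores, snippets), reverse=True):
    print(f"{score:.2f}  {text}")
```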
The ultimate goal is to help doctors do all that research, and then to present a Microsoft Azure cloud-based tool that lets them model which treatments would work best based on the information they have gathered.
“If we can use this knowledge base to present the research results most relevant for each specific patient, then a regular oncologist can take a look and make the best decision,” said Ravi Pandya, a principal software architect at Microsoft who also is working on Project Hanover.
Project Hanover began with a tool called Literome, a cloud computing-based system that sorts through millions of research papers to look for the genomic research that might be applicable to an individual disease diagnosis.
That’s a task that would be hard for oncologists to do on their own because of sheer volume, and it’s made more complicated by the fact that researchers aren’t always consistent in how they describe their work. That means several research papers focusing on the same genetic information may not have much overlap in language.
“The problem is that people are so creative in figuring out a different way to say the same thing,” Poon said.
To build Literome, Poon and his colleagues used machine learning to develop natural language processing tools that require only a small amount of available knowledge to create a sophisticated model for finding those different descriptions of similar knowledge.
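Here is a toy illustration of that paraphrase problem – not Literome itself. Three invented sentences state the same gene relationship in different words, and a few hand-written patterns collapse them into one canonical fact; Literome learns such mappings with machine learning rather than relying on hand-written rules, which is what lets it scale to millions of papers.

```python
# Toy paraphrase-normalization sketch (invented sentences, hand-written patterns).
# Real systems like Literome learn these mappings rather than enumerating them.
import re

sentences = [
    "KRAS activates the MAPK pathway.",
    "Activation of the MAPK pathway is driven by KRAS.",
    "The MAPK pathway is upregulated in response to KRAS signaling.",
]

patterns = [
    re.compile(r"(?P<src>\w+) activates the (?P<dst>\w+) pathway"),
    re.compile(r"activation of the (?P<dst>\w+) pathway is driven by (?P<src>\w+)", re.I),
    re.compile(r"the (?P<dst>\w+) pathway is upregulated in response to (?P<src>\w+)", re.I),
]

facts = set()
for sentence in sentences:
    for pattern in patterns:
        match = pattern.search(sentence)
        if match:
            # Normalize every phrasing into the same (gene, relation, target) triple.
            facts.add((match.group("src").upper(), "activates", match.group("dst").upper()))

print(facts)   # all three phrasings collapse to one fact: ('KRAS', 'activates', 'MAPK')
```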
Now, the tool is being expanded to also look at experiments and other sources of information that might be helpful.
Poon’s team also is working with the Knight Cancer Institute at Oregon Health and Science University to help its researchers find better ways to fight a complex and often deadly form of cancer called acute myeloid leukemia.
Brian Druker, the director of the Knight Cancer Institute, said a person with this form of cancer may actually be fighting three or four types of leukemia. That’s made it extremely difficult to figure out the right medicine to use and whether a patient will develop resistance to the treatment.
“It was clear we needed incredibly sophisticated computational efforts to try to digest all the data we were generating and to try to make sense of it,” said Druker, whose previous research led to vastly improved life expectancies for patients with chronic myeloid leukemia.
Druker thinks of this kind of collaboration as a two-way dialogue: His team of experts can provide the hypotheses that help the computer scientists know what to look for in the data. The computer scientists, in turn, can do the analysis needed to help them prove or disprove their hypotheses.
That can then help them more quickly develop the needed treatments and therapies.
“I’ve always believed that the data is trying to tell us what the answer is, but we need to know how to listen to it,” he said. “That’s where the computation comes in.”
Druker believes we are just at the beginning of understanding how data can help inform cancer research. In addition to genomic data, he said, researchers also should start looking at what he calls the other “omics,” including proteomics, or the study of proteins, and metabolomics, or the study of chemical processes involving metabolites.
“We’re going to have to get beyond the genome,” he said. “The genome is telling us a lot, but it’s not telling us everything.”
Poon said they are still in the early stages of the research, but already they see how it could change, and save, lives.
“We are at this tantalizing moment where we’ve caught a glimpse of this really promising future, but there is so much work to be done,” he said.