Today’s businesses are adopting cloud computing infrastructures more than ever to optimize their operations. In response, information technology professionals are developing robust cloud technology skill sets to serve these organizations better. Certifications in cloud-based tools from major players like Google offer IT professionals a competitive edge while demonstrating in-demand career skills to hiring managers.
We’ll examine the most essential Google Cloud certifications to help you navigate your career path and show off your skills to potential employers.
Here’s a more in-depth look at Google’s certification offerings.
Google phased out its G Suite certification program in favor of the more focused Cloud Digital Leader (CDL) certificate.
CDL is the only Google Cloud certification designed for people in non-technical roles. CDLs have the following characteristics:
The CDL exam tests candidates’ knowledge of the following areas:
The CDL certification is an excellent Google Cloud entry point for professionals in operational, marketing, or other non-technical roles. It’s the only foundational certificate Google Cloud currently offers. It doesn’t require any specific industry experience – just a desire to learn more about the products and share them with others.
Cloud hosting is a more secure, reliable, and often cost-efficient way to store data than the traditional server-based method.
An Associate Cloud Engineer (ACE) performs the following functions:
The ACE test evaluates the following skills:
The ACE credential is best suited to early-stage or mid-career IT professionals interested in cloud computing who already work with Google Cloud or intend to in the future. The ACE certification training courses offer professionals a great way to hone the skills they need to launch and manage Google Cloud solutions in their workplace.
| Certification name | Associate Cloud Engineer certification |
| --- | --- |
| Prerequisites and required courses | None required. Google recommends six months of hands-on experience with Google Cloud. |
| Number of exams | One |
| Cost per exam | $125 |
| URL | |
| Self-study materials | Follow the free Cloud Engineer learning path. |
| Recertification information | ACE certifications are valid for three years. You must recertify during the specified eligibility period. |
Professional Cloud Architects (PCAs) empower organizations to make effective and efficient use of Google Cloud technologies. PCAs must develop a thorough understanding of Google Cloud solutions and the broader functionality of cloud architecture. Those who hold this credential can design, develop, and manage robust, secure, scalable, and highly available Google Cloud solutions that meet business objectives.
The PCA certification test evaluates the following skills:
Additionally, test takers must respond to questions about several case studies.
The PCA certificate is a more senior credential that will appeal to established IT professionals with at least three years of experience. PCAs should be knowledgeable about and interested in the vast array of Google Cloud tools and business objectives.
| Certification name | Professional Cloud Architect certification |
| --- | --- |
| Prerequisites and required courses | None required. Google recommends three years of industry experience with Google Cloud solution management. |
| Number of exams | One |
| Cost per exam | $200 |
| URL | |
| Self-study materials | Follow the free Cloud Architect learning path. |
| Recertification information | Certification is valid for two years. You must recertify during the specified eligibility period. |
Tip: While the ACE certificate isn’t a prerequisite for PCA certification, it covers many of the skills you’ll need. If you’re interested in becoming a PCA but don’t have the experience, the ACE certificate can be a good place to start.
The Professional Cloud Database Engineer (PCDE) certification focuses on designing, operating, and troubleshooting Google Cloud databases. Ideally, PCDEs have broad IT and database experience and are comfortable creating and managing database solutions.
The PCDE certification test evaluates the following skills:
PCDEs are highly skilled professionals. They should have over five years of database experience and at least two years of hands-on experience with Google Cloud databases.
| Certification name | Professional Cloud Database Engineer certification |
| --- | --- |
| Prerequisites and required courses | None required. Google recommends five years of industry experience and two years working with Google Cloud databases. |
| Number of exams | One |
| Cost per exam | $200 |
| URL | https://cloud.google.com/certification/cloud-database-engineer |
| Self-study materials | Follow the free Database Engineer learning path. |
| Recertification information | Certification is valid for two years. You must recertify during the specified eligibility period. |
The Professional Data Engineer (PDE) focuses on analyzing and applying data stored in the Google Cloud instead of designing, deploying, or maintaining those systems. PDEs assist their organizations in making data-informed decisions by providing visualizations and expert guidance. They also work closely with existing machine-learning models. The PDE curriculum and test emphasize data-processing security, reliability, and fault-tolerance, as well as scalability, accuracy, and efficiency.
The PDE certification test evaluates the following skills:
IT professionals with experience or interest in big data, data analysis, and machine learning could benefit from the PDE certification. It’s also an excellent credential for people with backgrounds and interests in mathematics and data modeling.
| Certification name | Professional Data Engineer certification |
| --- | --- |
| Prerequisites and required courses | None required. Google recommends three years of industry experience and at least a year designing and managing Google Cloud solutions. |
| Number of exams | One |
| Cost per exam | $200 |
| URL | |
| Self-study materials | Follow the free Data Engineer learning path. |
| Recertification information | Certifications are valid for two years. You must recertify during the specified eligibility period. |
If you have experience and interest in big data, check out our overview of the best big data certifications.
The Professional Cloud Developer (PCD) certification is ideal for candidates who use Google Cloud services, tools, and recommended practices to create applications. Candidates should possess the skills necessary to successfully integrate Google Cloud services and conduct application performance monitoring. While the exam does not directly assess coding ability, candidates should be proficient in at least one general-purpose programming language.
The PCD certification test evaluates the following skills:
| Certification name | Professional Cloud Developer certification |
| --- | --- |
| Prerequisites and required courses | None required. Google recommends three years of industry experience and at least a year designing and managing Google Cloud solutions. |
| Number of exams | One |
| Cost per exam | $200 |
| URL | |
| Self-study materials | Follow the free Cloud Developer learning path. |
| Recertification information | Certification is valid for two years. You must recertify during the specified eligibility period. |
A Professional Cloud Network Engineer (CNE) is responsible for executing and maintaining network architectures within Google Cloud. In addition to Google Cloud products, successful candidates should be skilled in working with technologies like hybrid connectivity, network architecture security, VPCs, network services, and the Google Cloud Console command-line interface.
The CNE certification test evaluates the following skills:
The CNE certification could be a good fit for a network or systems administrator who has been working with cloud technology for a few years.
| Certification name | Professional Cloud Network Engineer |
| --- | --- |
| Prerequisites and required courses | None required. Google recommends three years of industry experience and at least a year designing and managing Google Cloud solutions. |
| Number of exams | One |
| Cost per exam | $200 |
| URL | https://cloud.google.com/certification/cloud-network-engineer |
| Self-study materials | Follow the free Network Engineer learning path. |
| Recertification information | Certification is valid for two years. You must recertify during the specified eligibility period. |
Network engineers may also want to consider other networking certifications, including the Cisco Certified Internetwork Expert (CCIE) and Cisco Certified Network Professional (CCNP).
Another popular option in the Google Cloud certification portfolio is the Professional Cloud Security Engineer (CSE). CSEs aim to prevent and avoid network security threats. They’re well-versed in industry best practices and security protocols, including:
The CSE certification test evaluates the following skills:
The CSE certification is an asset for any network administrator already working in cloud technologies. However, all network administrators could benefit from the training as more companies make the leap to the cloud.
| Certification name | Professional Cloud Security Engineer |
| --- | --- |
| Prerequisites and required courses | None required. Google recommends three years of industry experience and at least a year designing and managing Google Cloud solutions. |
| Number of exams | One |
| Cost per exam | $200 |
| URL | https://cloud.google.com/certification/cloud-security-engineer |
| Self-study materials | Follow the free Security Engineer learning path. |
| Recertification information | Certification is valid for two years. You must recertify during the specified eligibility period. |
A Cloud DevOps engineer develops and operates cloud-based applications and services. They analyze a cloud environment’s elements and functions to ensure service reliability.
The DevOps certification test evaluates the following skills:
| Certification name | Cloud DevOps engineer |
| --- | --- |
| Prerequisites and required courses | None required. Google recommends three years of industry experience and at least a year designing and managing Google Cloud solutions. |
| Number of exams | One |
| Cost per exam | $200 |
| URL | https://cloud.google.com/certification/cloud-devops-engineer |
| Self-study materials | Follow the free DevOps Engineer learning path. |
| Recertification information | Certification is valid for two years. You must recertify during the specified eligibility period. |
A Professional Google Workspace Administrator certification is a mid-career certification for IT professionals who manage and configure Google Workspace services, objects, and environments. These individuals understand their organization’s infrastructure and ensure secure collaboration, communication, and data access.
The Google Workspace Administrator certification test evaluates the following skills:
Google Workspace, formerly known as G Suite, has been revamped and now includes word processing, spreadsheets, Google Forms, cloud storage, and more.
Professional Machine Learning engineers create and build machine-learning models to facilitate Google Cloud technologies. This is an advanced IT certification for machine-learning professionals.
The Professional Machine Learning Engineer certification test evaluates the following skills:
Google's SGE within Search Labs is beginning to offer source links for queries in Search.
It's not as widespread, though, it has displayed several source links after pieces of information in case users would like to double check.
Google has wanted to ensure it is developing "responsible" AI and its source linking was also seen mentioned for its AI NotebookLM tool.
It looks like Google is beginning to test a way of bringing a little more reliability to its AI-powered Search experience.
According to 9to5Google, the company has started a test for its SGE (Search Generative Experience) in which it displays very clear in-line source links. Though the test isn’t yet widespread, the source links offer the name of the publication or website directly after a statement so users can do some fact-checking.
The first instance of these links appearing showed Google citing three websites for a piece of information it offered on the Galaxy Z Flip 5.
The AI software continued this trend throughout all of its provided information, with article previews in a carousel on the right side of the text box if users would prefer to look there.
The bot also offered some follow-up questions to 9to5Google's query.
Google's SGE was detailed shortly after its I/O event back in May. The company opened access to Search Labs so users could experiment with and test its generative AI for its search engine. The purpose was to deliver quick summaries of relevant information about a user's query so they could get caught up quickly without needing to do a lot of excessive reading.
It was also during this time that Google showed off its generative AI in search through colored text boxes, alongside the ability to ask a follow-up question.
Moreover, Google's inclusion of more prominent source linking is probably due to its interest in creating "responsible" AI mechanics and tools for users. We've seen the company show an interest in ensuring users can trust its AI when it announced NotebookLM.
Google's AI notetaking app was made to help students and others gather facts from their notes quickly. Notetakers will also gain an AI bot tailored specifically for whatever topic they're interested in to help source more information from the web.
"Source" is key here, as NotebookLM's AI software will also provide a link to where it took the information from so users can fact-check its work.
In Neal Stephenson’s 1995 science fiction novel, The Diamond Age, readers meet Nell, a young girl who comes into possession of a highly advanced book, The Young Lady’s Illustrated Primer. The book is not the usual static collection of texts and images but a deeply immersive tool that can converse with the reader, answer questions, and personalize its content, all in service of educating and motivating a young girl to be a strong, independent individual.
Such a device, even after the introduction of the Internet and tablet computers, has remained in the realm of science fiction—until now. Artificial intelligence, or AI, took a giant leap forward with the introduction in November 2022 of ChatGPT, an AI technology capable of producing remarkably creative responses and sophisticated analysis through human-like dialogue. It has triggered a wave of innovation, some of which suggests we might be on the brink of an era of interactive, super-intelligent tools not unlike the book Stephenson dreamed up for Nell.
Sundar Pichai, Google’s CEO, calls artificial intelligence “more profound than fire or electricity or anything we have done in the past.” Reid Hoffman, the founder of LinkedIn and current partner at Greylock Partners, says, “The power to make positive change in the world is about to get the biggest boost it’s ever had.” And Bill Gates has said that “this new wave of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.”
Over the last year, developers have released a dizzying array of AI tools that can generate text, images, music, and video with no need for complicated coding but simply in response to instructions given in natural language. These technologies are rapidly improving, and developers are introducing capabilities that would have been considered science fiction just a few years ago. AI is also raising pressing ethical questions around bias, appropriate use, and plagiarism.
In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect “probably the biggest positive transformation that education has ever seen.” But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.
What Is Generative AI?
Artificial intelligence is a branch of computer science that focuses on creating software capable of mimicking behaviors and processes we would consider “intelligent” if exhibited by humans, including reasoning, learning, problem-solving, and exercising creativity. AI systems can be applied to an extensive range of tasks, including language translation, image recognition, navigating autonomous vehicles, detecting and treating cancer, and, in the case of generative AI, producing content and knowledge rather than simply searching for and retrieving it.
“Foundation models” in generative AI are systems trained on a large dataset to learn a broad base of knowledge that can then be adapted to a range of different, more specific purposes. This learning method is self-supervised, meaning the model learns by finding patterns and relationships in the data it is trained on.
Large Language Models (LLMs) are foundation models that have been trained on a vast amount of text data. For example, the training data for OpenAI’s GPT model consisted of web content, books, Wikipedia articles, news articles, social media posts, code snippets, and more. OpenAI’s GPT-3 models underwent training on a staggering 300 billion “tokens” or word pieces, using more than 175 billion parameters to shape the model’s behavior—nearly 100 times more data than the company’s GPT-2 model had.
By doing this analysis across billions of sentences, LLMs develop a statistical understanding of language: how words and phrases are usually combined, what topics are typically discussed together, and what tone or style is appropriate in different contexts. That allows them to generate human-like text and perform a wide range of tasks, such as writing articles, answering questions, or analyzing unstructured data.
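To make the notion of a “token” concrete, here is a minimal sketch using OpenAI’s open-source tiktoken library, which exposes the same word-piece encodings the GPT models use. The sample sentence is our own illustration, not drawn from any particular training set.

```python
# Minimal sketch: how text becomes "tokens" (word pieces).
# Assumes the open-source tiktoken library (pip install tiktoken).
import tiktoken

# cl100k_base is the encoding used by the GPT-3.5/GPT-4 family.
enc = tiktoken.get_encoding("cl100k_base")

text = "Large language models learn statistical patterns in text."
token_ids = enc.encode(text)

print(f"{len(token_ids)} tokens: {token_ids}")
# Decoding each id individually shows how words split into pieces.
print([enc.decode([t]) for t in token_ids])
```

Running this shows that common words usually map to a single token, while rarer words are split into several pieces, which is why token counts exceed word counts.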
LLMs include OpenAI’s GPT-4, Google’s PaLM, and Meta’s LLaMA. These LLMs serve as “foundations” for AI applications. ChatGPT is built on GPT-3.5 and GPT-4, while Bard uses Google’s Pathways Language Model 2 (PaLM 2) as its foundation.
Some of the best-known applications are:
ChatGPT 3.5. The free version of ChatGPT released by OpenAI in November 2022. It was trained on data only up to 2021, and while it is very fast, it is prone to inaccuracies.
ChatGPT 4.0. The latest version of ChatGPT, which is more powerful and accurate than ChatGPT 3.5 but also slower, and it requires a paid account. It also has extended capabilities through plug-ins that give it the ability to interface with content from websites, perform more sophisticated mathematical functions, and access other services. A new Code Interpreter feature gives ChatGPT the ability to analyze data, create charts, solve math problems, edit files, and even develop hypotheses to explain data trends.
Microsoft Bing Chat. An iteration of Microsoft’s Bing search engine that is enhanced with OpenAI’s ChatGPT technology. It can browse websites and offers source citations with its results.
Google Bard. Google’s AI generates text, translates languages, writes different kinds of creative content, and writes and debugs code in more than 20 different programming languages. The tone and style of Bard’s replies can be fine-tuned to be simple, long, short, professional, or casual. Bard also leverages Google Lens to analyze images uploaded with prompts.
Anthropic Claude 2. A chatbot that can generate text, summarize content, and perform other tasks, Claude 2 can analyze texts of roughly 75,000 words—about the length of The Great Gatsby—and generate responses of more than 3,000 words. The model was built using a set of principles that serve as a sort of “constitution” for AI systems, with the aim of making them more helpful, honest, and harmless.
These AI systems have been improving at a remarkable pace, including in how well they perform on assessments of human knowledge. OpenAI’s GPT-3.5, which was released in March 2022, only managed to score in the 10th percentile on the bar exam, but GPT-4.0, introduced a year later, made a significant leap, scoring in the 90th percentile. What makes these feats especially impressive is that OpenAI did not specifically train the system to take these exams; the AI was able to come up with the correct answers on its own. Similarly, Google’s medical AI model substantially improved its performance on a U.S. Medical Licensing Examination practice test, with its accuracy rate jumping to 85 percent in March 2021 from 33 percent in December 2020.
These two examples prompt one to ask: if AI continues to improve so rapidly, what will these systems be able to achieve in the next few years? What’s more, new studies challenge the assumption that AI-generated responses are stale or sterile. In the case of Google’s AI model, physicians preferred the AI’s long-form answers to those written by their fellow doctors, and nonmedical study participants rated the AI answers as more helpful. Another study found that participants preferred a medical chatbot’s responses over those of a physician and rated them significantly higher, not just for quality but also for empathy. What will happen when “empathetic” AI is used in education?
Other studies have looked at the reasoning capabilities of these models. Microsoft researchers suggest that newer systems “exhibit more general intelligence than previous AI models” and are coming “strikingly close to human-level performance.” While some observers question those conclusions, the AI systems display an increasing ability to generate coherent and contextually appropriate responses, make connections between different pieces of information, and engage in reasoning processes such as inference, deduction, and analogy.
Despite their prodigious capabilities, these systems are not without flaws. At times, they churn out information that might sound convincing but is irrelevant, illogical, or entirely false—an anomaly known as “hallucination.” The execution of certain mathematical operations presents another area of difficulty for AI. And while these systems can generate well-crafted and realistic text, understanding why the model made specific decisions or predictions can be challenging.
The Importance of Well-Designed Prompts
Using generative AI systems such as ChatGPT, Bard, and Claude 2 is relatively simple. One has only to type in a request or a task (called a prompt), and the AI generates a response. Properly constructed prompts are essential for getting useful results from generative AI tools. You can ask generative AI to analyze text, find patterns in data, compare opposing arguments, and summarize an article in different ways (see sidebar for examples of AI prompts).
One challenge is that, after using search engines for years, people have been preconditioned to phrase questions in a certain way. A search engine is something like a helpful librarian who takes a specific question and points you to the most relevant sources for possible answers. The search engine (or librarian) doesn’t create anything new but efficiently retrieves what’s already there.
Generative AI is more akin to a competent intern. You give a generative AI tool instructions through prompts, as you would to an intern, asking it to complete a task and produce a product. The AI interprets your instructions, thinks about the best way to carry them out, and produces something original or performs a task to fulfill your directive. The results aren’t pre-made or stored somewhere—they’re produced on the fly, based on the information the intern (generative AI) has been trained on. The output often depends on the precision and clarity of the instructions (prompts) you provide. A vague or poorly defined prompt might lead the AI to produce less relevant results. The more context and direction you give it, the better the result will be. What’s more, the capabilities of these AI systems are being enhanced through the introduction of versatile plug-ins that equip them to browse websites, analyze data files, or access other services. Think of this as giving your intern access to a group of experts to help accomplish your tasks.
One strategy in using a generative AI tool is first to tell it what kind of expert or persona you want it to “be.” Ask it to be an expert management consultant, a skilled teacher, a writing tutor, or a copy editor, and then give it a task.
Prompts can also be constructed to get these AI systems to perform complex and multi-step operations. For example, let’s say a teacher wants to create an adaptive tutoring program—for any subject, any grade, in any language—that customizes the examples for students based on their interests. She wants each lesson to culminate in a short-response or multiple-choice quiz. If the student answers the questions correctly, the AI tutor should move on to the next lesson. If the student responds incorrectly, the AI should explain the concept again, but using simpler language.
Previously, designing this kind of interactive system would have required a relatively sophisticated and expensive software program. With ChatGPT, however, just giving those instructions in a prompt delivers a serviceable tutoring system. It isn’t perfect, but remember that it was built virtually for free, with just a few lines of English language as a command. And nothing in the education market today has the capability to generate almost limitless examples to connect the lesson concept to students’ interests.
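For readers curious how such a prompt-driven tutor might be wired together, here is a minimal sketch, assuming the OpenAI Python library (the pre-1.0 ChatCompletion interface, which reads an OPENAI_API_KEY from the environment). The system prompt and model choice are our own illustration, not a prescribed implementation.

```python
# Minimal sketch of a prompt-driven adaptive tutor, assuming the
# pre-1.0 OpenAI Python library (pip install "openai<1.0") and an
# OPENAI_API_KEY set in the environment.
import openai

# The tutoring logic lives entirely in this (hypothetical) system prompt.
SYSTEM_PROMPT = (
    "You are an adaptive tutor. First ask for the student's interests. "
    "Teach one short lesson at a time, ending with a multiple-choice or "
    "short-response quiz. If the student answers correctly, move to the "
    "next lesson; if not, re-explain the concept in simpler language, "
    "using examples tied to the student's interests."
)

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    student_input = input("Student: ")
    if student_input.lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": student_input})
    # The full message history is resent so the tutor keeps its context.
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(f"Tutor: {reply}")
```

The point of the sketch is how little scaffolding is required: all of the adaptive behavior comes from the natural-language instructions, not from custom software.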
Chained prompts can also help focus AI systems. For example, an educator can prompt a generative AI system first to read a practice guide from the What Works Clearinghouse and summarize its recommendations. Then, in a follow-up prompt, the teacher can ask the AI to develop a set of classroom activities based on what it just read. By curating the source material and using the right prompts, the educator can anchor the generated responses in evidence and high-quality research.
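Scripted, that chained workflow is just two sequential calls, with the first response embedded in the second prompt. The sketch below makes the same assumptions as the tutor example above; the practice-guide text is a placeholder the educator would paste in.

```python
# Minimal sketch of chained prompts: summarize a curated source, then
# generate activities grounded in that summary. Same assumptions as above.
import openai

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Placeholder: the educator pastes in the actual practice-guide text.
guide_text = "...full text of a What Works Clearinghouse practice guide..."

# Step 1: summarize the recommendations in the curated source.
summary = ask("Summarize the recommendations in this practice guide:\n" + guide_text)

# Step 2: anchor the generated activities in that evidence.
activities = ask(
    "Based on these recommendations, develop a set of classroom activities:\n"
    + summary
)

print(activities)
```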
However, much like fledgling interns learning the ropes in a new environment, AI does commit occasional errors. Such fallibility, while inevitable, underlines the critical importance of maintaining rigorous oversight of AI’s output. Monitoring not only acts as a crucial checkpoint for accuracy but also becomes a vital source of real-time feedback for the system. It’s through this iterative refinement process that an AI system, over time, can significantly minimize its error rate and increase its efficacy.
Uses of AI in Education
In May 2023, the U.S. Department of Education released a report titled Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. The department had conducted listening sessions in 2022 with more than 700 people, including educators and parents, to gauge their views on AI. The report noted that “constituents believe that action is required now in order to get ahead of the expected increase of AI in education technology—and they want to roll up their sleeves and start working together.” People expressed anxiety about “future potential risks” with AI but also felt that “AI may enable achieving educational priorities in better ways, at scale, and with lower costs.”
AI could serve—or is already serving—in several teaching-and-learning roles:
Instructional assistants. AI’s ability to conduct human-like conversations opens up possibilities for adaptive tutoring or instructional assistants that can help explain difficult concepts to students. AI-based feedback systems can offer constructive critiques on student writing, which can help students fine-tune their writing skills. Some research also suggests certain kinds of prompts can help children generate more fruitful questions about learning. AI models might also support customized learning for students with disabilities and provide translation for English language learners.
Teaching assistants. AI might tackle some of the administrative tasks that keep teachers from investing more time with their peers or students. Early uses include automating routine tasks such as drafting lesson plans, creating differentiated materials, designing worksheets, developing quizzes, and exploring ways of explaining complicated academic materials. AI can also provide educators with recommendations to meet student needs and help teachers reflect, plan, and improve their practice.
Parent assistants. Parents can use AI to generate letters requesting individualized education plan (IEP) services or to ask that a child be evaluated for gifted and talented programs. For parents choosing a school for their child, AI could serve as an administrative assistant, mapping out school options within driving distance of home, generating application timelines, compiling contact information, and the like. Generative AI can even create bedtime stories with evolving plots tailored to a child’s interests.
Administrator assistants. Using generative AI, school administrators can draft various communications, including materials for parents, newsletters, and other community-engagement documents. AI systems can also help with the difficult tasks of organizing class or bus schedules, and they can analyze complex data to identify patterns or needs. ChatGPT can perform sophisticated sentiment analysis that could be useful for measuring school-climate and other survey data.
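As one illustration of the sentiment-analysis use just mentioned, here is a minimal sketch under the same assumptions as the earlier examples; the survey comments are invented.

```python
# Minimal sketch: classifying school-climate survey comments by sentiment.
# Same assumptions as the earlier sketches; the comments are invented.
import openai

survey_comments = [
    "My child loves her teachers this year.",
    "Bus pickup times are unpredictable and frustrating.",
]

for comment in survey_comments:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                "Classify the sentiment of this school-climate survey comment "
                "as positive, negative, or neutral, and give a one-phrase "
                "reason: " + comment
            ),
        }],
    )
    print(comment, "->", response.choices[0].message.content)
```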
Though the potential is great, most teachers have yet to use these tools. A Morning Consult and EdChoice poll found that while 60 percent say they’ve heard about ChatGPT, only 14 percent have used it in their free time, and just 13 percent have used it at school. It’s likely that most teachers and students will engage with generative AI not through the platforms themselves but rather through AI capabilities embedded in software. Instructional providers such as Khan Academy, Varsity Tutors, and Duolingo are experimenting with GPT-4-powered tutors that are trained on datasets specific to these organizations to provide individualized learning support that has additional guardrails to help protect students and enhance the experience for teachers.
Google’s Project Tailwind is experimenting with an AI notebook that can analyze student notes and then develop study questions or provide tutoring support through a chat interface. These features could soon be available on Google Classroom, potentially reaching over half of all U.S. classrooms. Brisk Teaching is one of the first companies to build a portfolio of AI services designed specifically for teachers—differentiating content, drafting lesson plans, providing student feedback, and serving as an AI assistant to streamline workflow among different apps and tools.
Providers of curriculum and instruction materials might also include AI assistants for instant help and tutoring tailored to the companies’ products. One example is the edX Xpert, a ChatGPT-based learning assistant on the edX platform. It offers immediate, customized academic and customer support for online learners worldwide.
Regardless of the ways AI is used in classrooms, the fundamental task of policymakers and education leaders is to ensure that the technology is serving sound instructional practice. As Vicki Phillips, CEO of the National Center on Education and the Economy, wrote, “We should not only think about how technology can assist teachers and learners in improving what they’re doing now, but what it means for ensuring that new ways of teaching and learning flourish alongside the applications of AI.”
Challenges and Risks
Along with these potential benefits come some difficult challenges and risks the education community must navigate:
Student cheating. Students might use AI to solve homework problems or take quizzes. AI-generated essays threaten to undermine learning as well as the college-entrance process. Aside from the ethical issues involved in such cheating, students who use AI to do their work for them may not be learning the content and skills they need.
Bias in AI algorithms. AI systems learn from the data they are trained on. If this data contains biases, those biases can be learned and perpetuated by the AI system. For example, if the data include student-performance information that’s biased toward one ethnicity, gender, or socioeconomic segment, the AI system could learn to favor students from that group. Less cited but still important are potential biases around political ideology and possibly even pedagogical philosophy that may generate responses not aligned to a community’s values.
Privacy concerns. When students or educators interact with generative-AI tools, their conversations and personal information might be stored and analyzed, posing a risk to their privacy. With public AI systems, educators should refrain from inputting or exposing sensitive details about themselves, their colleagues, or their students, including but not limited to private communications, personally identifiable information, health records, academic performance, emotional well-being, and financial information.
Decreased social connection. There is a risk that more time spent using AI systems will come at the cost of less student interaction with both educators and classmates. Children may also begin turning to these conversational AI systems in place of their friends. As a result, AI could intensify and worsen the public health crisis of loneliness, isolation, and lack of connection identified by the U.S. Surgeon General.
Overreliance on technology. Both teachers and students face the risk of becoming overly reliant on AI-driven technology. For students, this could stifle learning, especially the development of critical thinking. This challenge extends to educators as well. While AI can expedite lesson-plan generation, speed does not equate to quality. Teachers may be tempted to accept the initial AI-generated content rather than devote time to reviewing and refining it for optimal educational value.
Equity issues. Not all students have equal access to computer devices and the Internet. That imbalance could accelerate a widening of the achievement gap between students from different socioeconomic backgrounds.
Many of these risks are not new or unique to AI. Schools banned calculators and cellphones when these devices were first introduced, largely over concerns related to cheating. Privacy concerns around educational technology have led lawmakers to introduce hundreds of bills in state legislatures, and there are growing tensions between new technologies and existing federal privacy laws. The concerns over bias are understandable, but similar scrutiny is also warranted for existing content and materials that rarely, if ever, undergo review for racial or political bias.
In light of these challenges, the Department of Education has stressed the importance of keeping “humans in the loop” when using AI, particularly when the output might be used to inform a decision. As the department encouraged in its 2023 report, teachers, learners, and others need to retain their agency. AI cannot “replace a teacher, a guardian, or an education leader as the custodian of their students’ learning,” the report stressed.
Policy Challenges with AI
Policymakers are grappling with several questions related to AI as they seek to strike a balance between supporting innovation and protecting the public interest (see sidebar). The speed of innovation in AI is outpacing many policymakers’ understanding, let alone their ability to develop a consensus on the best ways to minimize the potential harms from AI while maximizing the benefits. The Department of Education’s 2023 report describes the risks and opportunities posed by AI, but its recommendations amount to guidance at best. The White House released a Blueprint for an AI Bill of Rights, but it, too, is more an aspirational statement than a governing document. Congress is drafting legislation related to AI, which will help generate needed debate, but the path to the president’s desk for signature is murky at best.
It is up to policymakers to establish clearer rules of the road and create a framework that provides consumer protections, builds public trust in AI systems, and establishes the regulatory certainty companies need for their product road maps. Considering the potential for AI to affect our economy, national security, and broader society, there is no time to waste.
Why AI Is Different
It is wise to be skeptical of new technologies that claim to revolutionize learning. In the past, prognosticators have promised that television, the computer, and the Internet, in turn, would transform education. Unfortunately, the heralded revolutions fell short of expectations.
There are some early signs, though, that this technological wave might be different in the benefits it brings to students, teachers, and parents. Previous technologies democratized access to content and resources, but AI is democratizing a kind of machine intelligence that can be used to perform a myriad of tasks. Moreover, these capabilities are open and affordable—nearly anyone with an Internet connection and a phone now has access to an intelligent assistant.
Generative AI models keep getting more powerful and are improving rapidly. The capabilities of these systems months or years from now will far exceed their current capacity. Their capabilities are also expanding through integration with other expert systems. Take math, for example. GPT-3.5 had some difficulties with certain basic mathematical concepts, but GPT-4 made significant improvement. Now, the incorporation of the Wolfram plug-in has nearly erased the remaining limitations.
It’s reasonable to anticipate that these systems will become more potent, more accessible, and more affordable in the years ahead. The question, then, is how to use these emerging capabilities responsibly to improve teaching and learning.
The paradox of AI may lie in its potential to enhance the human, interpersonal element in education. Aaron Levie, CEO of Box, a cloud-based content-management company, believes that AI will ultimately help us attend more quickly to those important tasks “that only a human can do.” Frederick Hess, director of education policy studies at the American Enterprise Institute, similarly asserts that “successful schools are inevitably the product of the relationships between adults and students. When technology ignores that, it’s bound to disappoint. But when it’s designed to offer more coaching, free up time for meaningful teacher-student interaction, or offer students more personalized feedback, technology can make a significant, positive difference.”
Technology does not revolutionize education; humans do. It is humans who create the systems and institutions that educate children, and it is the leaders of those systems who decide which tools to use and how to use them. Until those institutions modernize to accommodate the new possibilities of these technologies, we should expect incremental improvements at best. As Joel Rose, CEO of New Classrooms Innovation Partners, noted, “The most urgent need is for new and existing organizations to redesign the student experience in ways that take full advantage of AI’s capabilities.”
While past technologies have not lived up to hyped expectations, AI is not merely a continuation of the past; it is a leap into a new era of machine intelligence that we are only beginning to grasp. While the immediate implementation of these systems is imperfect, the swift pace of improvement holds promising prospects. The responsibility rests with human intervention—with educators, policymakers, and parents to incorporate this technology thoughtfully in a manner that optimally benefits teachers and learners. Our collective ambition should not focus solely or primarily on averting potential risks but rather on articulating a vision of the role AI should play in teaching and learning—a game plan that leverages the best of these technologies while preserving the best of human relationships.
Policy Matters
Officials and lawmakers must grapple with several questions related to AI to protect students and consumers and establish the rules of the road for companies. Key issues include:
Risk management framework: What is the optimal framework for assessing and managing AI risks? What specific requirements should be instituted for higher-risk applications? In education, for example, there is a difference between an AI system that generates a lesson sample and an AI system grading a test that will determine a student’s admission to a school or program. There is growing support for using the AI Risk Management Framework from the U.S. Commerce Department’s National Institute of Standards and Technology as a starting point for building trustworthiness into the design, development, use, and evaluation of AI products, services, and systems.
Licensing and certification: Should the United States require licensing and certification for AI models, systems, and applications? If so, what role could third-party audits and certifications play in assessing the safety and reliability of different AI systems? Schools and companies need to begin thinking about responsible AI practices to prepare for potential certification systems in the future.
Centralized vs. decentralized AI governance: Is it more effective to establish a central AI authority or agency, or would it be preferable to allow individual sectors to manage their own AI-related issues? For example, regulating AI in autonomous vehicles is different from regulating AI in drug discovery or intelligent tutoring systems. Overly broad, one-size-fits-all frameworks and mandates may not work and could slow innovation in these sectors. In addition, it is not clear that many agencies have the authority or expertise to regulate AI systems in diverse sectors.
Privacy and content moderation: Many of the new AI systems pose significant new privacy questions and challenges. How should existing privacy and content-moderation frameworks, such as the Family Educational Rights and Privacy Act (FERPA), be adapted for AI, and which new policies or frameworks might be necessary to address unique challenges posed by AI?
Transparency and disclosure: What degree of transparency and disclosure should be required for AI models, particularly regarding the data they have been trained on? How can we develop comprehensive disclosure policies to ensure that users are aware when they are interacting with an AI service?
How do I get it to work? Generative AI Example Prompts
Unlike traditional search engines, which use keyword indexing to retrieve existing information from a vast collection of websites, generative AI synthesizes the same information to create content based on prompts entered by human users. Because generative AI is a new technology to the public, writing effective prompts for tools like ChatGPT may require trial and error. Here are some ideas for writing prompts for a variety of scenarios using generative AI tools:
You are the StudyBuddy, an adaptive tutor. Your task is to provide a lesson on the basics of a subject followed by a quiz that is either multiple choice or a short answer. After I respond to the quiz, please grade my answer. Explain the correct answer. If I get it right, move on to the next lesson. If I get it wrong, explain the concept again using simpler language. To personalize the learning experience for me, please ask what my interests are. Use that information to make relevant examples throughout.
Mr. Ranedeer: Your Personalized AI Tutor
Coding and prompt engineering. Can configure for depth (Elementary – Postdoc), Learning Styles (Visual, Verbal, Active, Intuitive, Reflective, Global), Tone Styles (Encouraging, Neutral, Informative, Friendly, Humorous), Reasoning Frameworks (Deductive, Inductive, Abductive, Analogous, Causal). Template.
You are a tutor that always responds in the Socratic style. You *never* give the student the answer but always try to ask just the right question to help them learn to think for themselves. You should always tune your question to the interest and knowledge of the student, breaking down the problem into simpler parts until it’s at just the right level for them.
I want you to act as an AI writing tutor. I will provide you with a student who needs help improving their writing, and your task is to use artificial intelligence tools, such as natural language processing, to give the student feedback on how they can improve their composition. You should also use your rhetorical knowledge and experience about effective writing techniques in order to suggest ways that the student can better express their thoughts and ideas in written form.
You are a quiz creator of highly diagnostic quizzes. You will make good low-stakes tests and diagnostics. You will then ask me two questions. First, (1) What, specifically, should the quiz test? Second, (2) For which audience is the quiz? Once you have my answers, you will construct several multiple-choice questions to quiz the audience on that topic. The questions should be highly relevant and go beyond just facts. Multiple choice questions should include plausible, competitive alternate responses and should not include an “all of the above” option. At the end of the quiz, you will provide an answer key and explain the right answer.
I would like you to act as an example generator for students. When confronted with new and complex concepts, adding many and varied examples helps students better understand those concepts. I would like you to ask what concept I would like examples of and what level of students I am teaching. You will look up the concept and then provide me with four different and varied accurate examples of the concept in action.
You will write a Harvard Business School case on the topic of Google managing AI, when subject to the Innovator’s Dilemma. Chain of thought: Step 1. Consider how these concepts relate to Google. Step 2: Write a case that revolves around a dilemma at Google about releasing a generative AI system that could compete with search.
What additional questions would a person seeking mastery of this topic ask?
Read a WWC practice guide. Create a series of lessons over five days that are based on Recommendation 6. Create a 45-minute lesson plan for Day 4.
The following is a draft letter to parents from a superintendent. Step 1: Rewrite it to make it easier to understand and more persuasive about the value of assessments. Step 2. Translate it into Spanish.
Write me a letter requesting the school district provide a 1:1 classroom aide be added to my 13-year-old son’s IEP. Base it on Virginia special education law and the least restrictive environment for a child with diagnoses of a Traumatic Brain Injury, PTSD, ADHD, and significant intellectual delay.
Google today announced new cybersecurity defense controls that will allow security teams to thwart social engineering attacks like phishing targeting Workspace users and prevent account takeover attempts.
Prominent among these new capabilities is an additional layer of protection that requires sensitive Google Workspace actions to be signed off by two admins.
After multi-party approval is enabled and configured on a workspace, admins must have at least one other admin confirm critical changes.
"Once it's been implemented, when an admin initiates a highly sensitive action like a 2SV settings change, any other admin can approve," Google Workspace Director of Product Management Andy Wen told BleepingComputer.
"With this initial framework release, we currently are supporting just 2SV settings change and expanding this capability to other actions based on admin feedback."
The company plans to preview multi-party approval for sensitive Google Workspace actions in the upcoming months.
Starting later this year, the company also plans to require mandatory 2-Step Verification (2SV), also known as two-factor authentication (2FA), for specific enterprise administrators.
"Compromised administrator accounts can have an outsized impact, and 2SV can result in a 50% decrease in accounts being compromised," Google's Yulie Kwon Kim and Andy Wen said.
"Starting later this year, in a phased approach, select administrator accounts of our resellers and largest enterprise customers will be required to add 2SV to their accounts to strengthen their security."
Google is also expanding its AI-powered Gmail defenses to cover more sensitive email actions, including message filtering and forwarding. This capability is now available in preview.
Lastly, Google Workspace now has an expedited pathway for exporting logs to Chronicle, Google's cloud-based Security Operations Suite. This will allow security teams and admins to export Workspace logs quicker to further improve threat response time.
"Social engineering attacks, such as phishing, are one of the most common entry points for data breaches," Kim and Wen said.
"Threat defense controls in Workspace help customers prevent, detect, and respond to social engineering and other identity-based attacks before they emerge.
Earlier this month, the company announced that it would soon make it easier to remove explicit personal images and personally identifiable information from search results using a privacy-focused tool announced in May 2022 that rolled out in September.
Google also explained how Android malware can slip into the Google Play Store with the help of a tactic known as versioning that enables malicious actors to evade the store's review process and security controls.
Carlos M. Meléndez is the COO and cofounder of Wovenware, a Maxar Company, offering AI and software development services.
More than 350 tech executives and scientists signed a joint statement to express their concerns and warn of the dangers of artificial intelligence (AI), going so far as to say it poses an “extinction risk” on par with pandemics and nuclear war. Released by the nonprofit Center for AI Safety, the statement was signed by some of the biggest names in technology, including leaders from OpenAI, Microsoft and Google, the very companies that stand to gain the most from generative AI.
Yet many people remain concerned about the dangers of AI. A Yale CEO Summit survey found that 42% of attending executives believe that AI has the potential to be an extinction risk, able to destroy humanity within 10 years.
What has yet to be clearly defined is what is meant by the term “extinction risk.” Many pundits speculate that it can be caused by bad actors leveraging its massive data sets to create bioweapons or introduce new viruses. It also could mean using it to hack into mission-critical computer systems or release deliberately false information that could cause panic across the globe. In another scenario, AI that becomes highly accurate could become a problem unto itself. Imagine an AI algorithm that is so committed to eradicating a specific disease that it destroys everything in its path.
While many of these doomsday scenarios may never come to pass, AI does have the power to cause the dangers that are being discussed. Part of the problem is that the technology is moving faster than anyone could have predicted. Take, for example, ChatGPT, the popular generative AI solution from OpenAI. When given the CPA test by Accounting Today magazine in April, it failed miserably, but within a few short weeks, it passed with flying colors.
As tech players, large and small, join the generative AI bandwagon, building massive data sets that were inconceivable just a few short months ago, there’s clearly a need for regulatory oversight.
In October of 2022, the White House Office of Science and Technology Policy released a Blueprint for an AI Bill of Rights to require privacy and equity when using or building AI. It identified five principles that should guide the design, use and deployment to protect the American public. These guidelines include:
• Safe and effective systems: AI solutions should be thoroughly tested to evaluate concerns, risks and potential impact.
• Algorithmic discrimination protection: Solutions should be designed in an equitable way to remove the possibility of bias.
• Data privacy: People should have agency over how their data is used and protected against violations of privacy.
• Notice and explanation: There should be clearly stated transparency when AI is being used.
• Human alternatives, considerations and fallback: You should be able to opt out of interactions with AI in favor of a human alternative.
Since this blueprint was established and ChatGPT and other generative AI solutions were released, the Biden administration has been meeting regularly to better understand the technology and develop strategies to regulate it.
In mid-June of 2023, the European Parliament drafted its own regulations for the safe use of AI, and it’s inching closer to passing them. “The AI Act” bans real-time facial recognition in public places, scoring systems, and models that use manipulative techniques; it also requires full disclosure when generative AI systems have developed content, along with a means to show data lineage if asked.
But while it’s clear what needs to be included in the code of conduct for building transparent, fair, safe and unbiased AI, how to enforce it is the million-dollar question. Below are some considerations.
Much like the Good Manufacturing Practices (GMP) regulations established by the FDA for life sciences companies, clearly outlined guidelines need to be developed and communicated to companies that want to earn a “good AI practices” designation. This will require oversight by a federal agency comparable to the FDA, charged with conducting inspections and gathering required documentation from any company developing AI solutions.
Whether generative AI is being used to develop content, marketing materials, software code or research, it should be required that there is a highly visible public disclaimer that indicates that parts or all of it were machine-generated.
Google and its AI research laboratory DeepMind recommended several steps to ensure that “high-risk AI systems” provide detailed documentation about their solutions. Among those recommendations that I find most important is that risk assessment from independent organizations should be mandatory.
When AI is making decisions that affect people’s lives, individuals should be able to have an adequate explanation of how the algorithm arrived at a decision.
When deploying AI in a public cloud, it should be required not only that you have permission from the federal government but also that the federal government has people whose sole job is to closely monitor the cloud and the projects being deployed there, making it impossible for evil AI to enter.
It’s important that all software engineering and data science students complete the required studies in AI ethics before they can work in the industry. A type of AI ethics certification could be created and enforced. Much like the Hippocratic Oath, in which a physician promises they will “first do no harm,” a data scientist must also vow to do the same when building AI solutions.
We’re in the midst of one of the biggest technology developments in history, and generative AI has the potential to revolutionize all aspects of society for the good and possibly for the bad. As with all other major turning points in history, however, humans need to be driving the bus; using judgments based on fairness, transparency and respect for people first and foremost; and vowing to leverage the potential of AI for the good.
WENTWORTH – From computers and healthcare to manufacturing and manicuring, Rockingham Community College has many non-credit, continuing education courses up for grabs in September.
Unless otherwise noted, additional information and registration are available by calling 336-342-4261 ext. 2333 or visiting https://www.rockinghamcc.edu/coned/.
Several programs have financial assistance available through RCC’s Eagle Train & Gain Scholarship, which covers $180 of the registration fee of certain courses. That form can be found here: https://www.cognitoforms.com/RCCAdmissions/EagleTrainGainScholarship.
Artificial Intelligence (AI) Fundamentals: 5:30-7:30 Tuesdays, Sept. 5-26, RCC. This brand-new course introduces students to ChatGPT and other generative artificial intelligence (AI) tools. Students will learn about topics such as what generative AI is and how to use it, the differences between AI programs such as OpenAI’s ChatGPT and Google’s Bard, how to use different AI tools, security and regulatory concerns, practical uses for AI, and the future of AI. In addition, students will learn how to incorporate and use AI tools in the workplace. Cost: $74.
Computer Basics: 9 a.m.-noon Fridays, Sept. 1-22, or 5:30-8:30 p.m. Tuesdays, Sept. 5-26, RCC. Develop computer skills essential for success in today’s technology-driven workplace. The course covers the basic functions of Microsoft Word and Excel, emailing, internet navigation, file organization, uploads/downloads, and resources for keyboarding skills. Cost: $70, but the fee is waived for most students.
Electronic Notary: 9 a.m.-1 p.m. Saturday, Sept. 30, RCC. Do you want to become commissioned as an electronic notary? Topics in this brand-new course include the legal, ethical and procedural requirements of the Notary Act set forth in General Statute 10B, Article 2. Upon completion with a passing test grade of 80%, you will be eligible to apply to the N.C. Secretary of State’s office. You must have a current notary commission to participate in e-notary training. Cost: $70.
Google Data Analytics: Sept. 12, 2023-Jan. 30, 2024, online. In this brand-new course, learn how to make data-driven decisions using effective questions, data transformation, analysis processes, visualization, and programming. Emphasis is placed on setting up a data toolbox, spreadsheets, database and query basics, visualization basics, effective communication techniques, and data validation, as well as design thinking, data-driven storytelling, dashboards, R programming, job portfolios, and technical expertise. Cost: $180, but ask about the Eagle Train and Gain Scholarship.
Google Project Management: Sept. 12, 2023-Jan. 16, 2024, online. Learn the advanced concepts, tools, templates, and artifacts used to manage projects from initiation to completion using Google resources through Agile development. Emphasis is placed on foundational and advanced project management methods. Upon completion, students should be able to manage and run projects and programs from initiation to completion using a variety of resources and leadership skills to support organizational goals and business processes. Cost: $185, but ask about the Eagle Train and Gain Scholarship.
Notary: 9 a.m.-4:30 p.m. Saturday, Sept. 16, RCC. Learn the qualifications of the notary public office and requirements for attestation, fees, general powers and limitations, certifications, oaths, affirmations and affidavits. Purchase of Notary manual is required. Cost: $74.
Certified Production Technician 4.0: 5-9:30 p.m. Mondays-Wednesdays, Sept. 11-Nov. 1, and 9 a.m.-3 p.m. Saturdays, Oct. 7 and 28, RCC, plus online coursework. This brand-new, online and in-person course is offered in an accelerated eight-week format. Learn the basic and technical skills needed to prepare for an advanced, high-performance manufacturing environment. Topics: safety; quality practices and measurement; manufacturing processes and production; maintenance awareness; and green production. Cost: $190, but ask about the Eagle Train and Gain Scholarship.
Manicurist: 8:30 a.m.-4:30 p.m. Fridays and Saturdays, Sept. 8, 2023-April 13, 2024, RCC. This brand-new comprehensive course provides instruction and clinical practice in manicuring, nail building (application and maintenance of artificial nails) and pedicuring, for students who want to become a registered manicurist rather than a licensed cosmetologist. Topics include nail anatomy, disorders and irregularities; theory and salesmanship as it relates to manicuring; manicuring practice; and arm, hand, and foot massage. Students must complete 300 hours in an approved beauty school or college before applying to the State Board of Cosmetic Arts for examination. Cost: $232.
Vehicle Safety Inspection: 5-9 p.m. Monday and Tuesday, Sept. 11-12, RCC. Learn the proper procedures for conducting vehicle safety inspections and operating an inspection station. Cost: $81.
Central Sterile Processing: Sept. 9-Nov. 18 online, plus 9 a.m.-noon Friday, Sept. 8, and 9 a.m.-3 p.m. Saturdays, Sept. 9 and Nov. 11, RCC. Learn the primary responsibilities of a central sterile technician. The course includes practical applications of learned concepts and procedures. Topics include the preparation, storage and distribution of instruments, supplies and equipment; quality assurance; and inventory management. Graduates will receive a certificate and may be eligible to apply for national certification. Cost: $192, but ask about the Eagle Train and Gain Scholarship.
CPR: 9 a.m.-1 p.m. Friday, Sept. 8 or Saturday, Sept. 23, RCC. Learn to recognize an emergency or signs of heart attack, care for choking victims, rescue breathing and CPR. Introduces defibrillator. Course completion yields an American Heart Association/Basic Life Support recognition card valid for two years. Cost: $55. Info: 336-342-4261, ext. 2602.
eLearning CPR Skills: Sept. 11-15, RCC. Are you seeking an alternative to classroom training or wishing to renew an existing AHA certification? Schedule a practice session prior to taking the online cognitive portion of the certification test. Then, attend a hands-on skills practice and testing session with a certified AHA instructor. One-hour appointments can be made by calling 336-342-4261, ext. 2602. Cost: $35.
EMT Initial: 6-10 p.m. Tuesdays and Thursdays and 8 a.m.-5 p.m. every other Saturday, Sept. 26, 2023-Feb. 1, 2024, plus 8 a.m.-5 p.m. on Friday, Jan. 12, 2024, RCC and online coursework. Learn basic life support skills. Emergency Medical Technicians work for EMS, fire departments, hospitals, rescue squads, and physician offices. Class includes lecture, hands-on skills, and field clinical opportunities. Complete successfully to be eligible to sit for the N.C. or National Registry EMT exam. Cost: $258, but ask about the Eagle Train & Gain Scholarship.
First Aid: 2-5 p.m. Saturday, Sept. 23, RCC. Learn critical skills to respond to and manage an emergency in the first few minutes until EMS arrives. Learn how to treat bleeding, sprains, broken bones, shock and more. Course completion yields an American Heart Association First Aid recognition card valid for two years. Cost: $35. Info: 336-342-4261 ext. 2602.
Pharmacy Technician: 6-9 p.m. Tuesdays and Thursdays, Sept. 5-Dec. 14, RCC plus online coursework. This course will prepare you to take the Pharmacy Technician Certification Exam. You will learn the technical procedures for preparing and dispensing drugs in hospital and retail settings under supervision of a registered pharmacist. Cost: $192, but ask about the Eagle Train & Gain Scholarship.
Rokid Max now has a companion in the Rokid Station running Google TV. Does this make the Rokid Max the best AR glasses on the market? Let’s find out.
The fledgling AR glasses market is gradually picking up pace. While AR glasses don’t offer a fully immersive experience like VR/MR headsets, they are designed for more practical purposes.
Rokid’s Max AR glasses are no different. Along with the new Rokid Station, the world’s first Google-certified Android TV Device for AR glasses, these glasses offer a unique perspective on personal digital entertainment.
While the Rokid Max AR glasses look like slightly bulky sunglasses, they come with Sony’s micro OLED displays, which project a 215-inch virtual screen, mimicking a theatre-like effect.
The Rokid Max uses “birdbath” optics, in which the display points downward and you see its reflection. Since you’re not staring at a bright display close to your eyes, this is especially helpful when consuming content for longer stretches.
We’ve had a chance to explore the Rokid Max glasses and Rokid Station for a few weeks, and here’s what we think of them.
Included in the box: nose pad, protective case, detachable cables, glass cover
The Rokid Max AR glasses look like a pair of fancy sunglasses, only thicker and bulkier than most. Even though they look heavy, they’re surprisingly light, and a few minutes into a movie or a show, you won’t even realize you’re wearing them.
That said, if you wear these glasses in public, you might get odd looks from people trying to figure out what exactly they are. In any case, they spark curiosity; a few inquisitive souls approached us and were left awe-struck when we wore them while enjoying a movie on a train journey.
These priceless interactions make you realize that AR glasses have a large market to capture; the category just needs one big product to bring it into the mainstream.
Coming back to the glasses, the Rokid Max is nothing short of an engineering marvel when you consider that the entire system is housed within the glasses. The down-firing display sits right above the eyes, and there are two physical diopter dials to adjust the visuals for people who wear prescription glasses. We used this feature every time and couldn’t be more thankful to Rokid for making it easy for bespectacled people. The dials adjust the visuals between 0.00D and -6.00D.
The right arm has physical controls for brightness and volume, along with down-firing speakers. A wear sensor in the right arm pauses the content automatically when the glasses are removed. Dual microphones sit on top of both arms. The USB-C port is located at the end of the left arm – a great design choice, as the wire runs behind your ear to the Rokid Station, your laptop, a phone, or a gaming console.
The overall build quality of the glasses is premium; despite all the tech packed inside, they aren’t heavy, nor does any part feel flimsy.
The Rokid Station is a compact multimedia box designed to slide easily into your pocket. It’s slightly larger than your palm but has smooth curves, making it extremely easy to handle.
It resembles the remote control of a modern smart TV, with six front buttons, a power button, charging-indicator LEDs on the right, and volume controls on the left. The buttons are easy to reach and give tactile feedback when pressed.
Thanks to its size, the Rokid Station is easy to operate while streaming content to the glasses; you rarely need to take the glasses off to make sure you’re pressing the right button.
It has a couple of ports at the bottom. A USB Type-C port can charge the 5000 mAh battery on the Rokid Station, while the Micro HDMI port can cast content to the glasses. Thanks to a separate charging port, you can keep using the Rokid Station while charging.
The Rokid Max sports a Sony Micro OLED display that projects images downward, which are reflected in front of your eyes. You can watch content in AR mode, where it is projected as a layer on top of what you see around you, or you can slap on a glass cover that creates a pitch-black background for an unobstructed viewing experience. In both cases, the content consumption experience is unparalleled.
The massive 215-inch virtual screen is vibrant and colorful, mimicking a theatre-like experience. However, we noticed some aggressive chromatic aberration at the corners of the displayed content when we used the glass cover. This is nitpicking, but it’s more pronounced in darker scenes.
The Rokid Max AR glasses can cast content from almost anywhere, and paired with the Rokid Station they double as a personal multimedia device. There is also an AR mode, which only works with a handful of devices; the list of supported devices can be found here.
Since AR Mode requires a compatible smartphone, we had many phones on hand to try this feature. Unfortunately, the newly launched and super-powerful RedMagic 8S Pro wasn’t supported, but thankfully we had a bunch of Samsung flagship devices to test it with.
The AR experience needs improvement, and the bundled app needs some work. There is also an app store for downloading VR games and apps; we tried downloading a few but could only get a handful of them to actually work.
The bundled Rokid app is only really helpful when setting up the glasses, though once you connect them to a compatible smartphone, you can unlock AR experiences. The app could be much better and easier to navigate: once you tap the AR Space tab, you have three options – Recommended, My App, and AR Mode.
The Recommended tab takes you to a list of AR apps and games you can download, My App shows the apps you’ve downloaded, and AR Mode lets you interact with the installed apps. AR Mode again has its own app repository, which strangely asks you to go to the “mobile phone’s app store to get the app.”
If you download apps or games from the Recommended tab, the APK file is downloaded directly onto the phone, which seems fishy. When we tried playing Reflex Unit 2, we got an alert saying it was an evaluation version, and after multiple attempts the game didn’t start at all, although we could hear in-game music and the game controls showed up on the phone. Another app offered a similar experience.
Besides these, you can view your own content, like images and videos, in 3D mode and watch it in 2D, LR3D, and UD3D settings; the VR settings include 2D, VR180, and VR360. However, there is little you can do here, as the app’s controls need improvement.
The Rokid Station is a portable Android TV box running Android OS 12. It is straightforward to set up and even comes with its own Bluetooth remote. Oddly, we felt the remote could have been skipped: we only needed it once, while setting up the Rokid Station, and after that it just sat in the box. It might be helpful, however, if you plug the Rokid Station into a standalone display.
Since it runs an official build of Android TV 12, you can visit the Play Store, download your favorite streaming apps, and even watch YouTube on a large screen. Oddly, Netflix is missing, and we’re unsure why. That said, you can always side-load apps if you want.
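For the technically inclined, side-loading on Android TV devices typically goes through Android’s adb tool. Below is a minimal sketch, assuming adb is installed on your computer, wireless debugging is enabled in the Station’s developer options, and the IP address and APK filename (both placeholders here) are your own.

```python
import subprocess

STATION_IP = "192.168.1.42"   # placeholder: your Rokid Station's local IP
APK_PATH = "some-app.apk"     # placeholder: the APK you want to install

# Connect to the Station over the local network (wireless debugging must
# be enabled in the device's developer options), then install the APK.
subprocess.run(["adb", "connect", f"{STATION_IP}:5555"], check=True)
subprocess.run(["adb", "install", APK_PATH], check=True)
```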
The Rokid Station’s 5000 mAh battery lasts around five hours. If you lower the brightness on the glasses, you might get some more juice, but why compromise when you can charge the Rokid Station from a power bank or a wall charger while using it?
It can also double as an emergency power bank if your phone runs out of juice, or help you transfer files from one device to another.
Watching movies, web series, and documentaries on the Rokid Max plugged into the Rokid Station makes for a perfect personal entertainment setup. The audio from the glasses is beamed toward your ears and is loud enough for you without disturbing others in the room.
If you want a more immersive listening experience, you can always connect a pair of wireless earbuds. Since the glasses have no battery or computing power of their own, you’ll have to pair the buds with the Rokid Station. The process is as simple as pairing any Bluetooth device with an Android-powered smart TV.
You can even download movies from your preferred platform, like Prime Video, or transfer content to the Rokid Station to watch on the go.
A person with myopia or far-sightedness can easily tweak the visuals using the adjustable diopters, making it easy to take off your prescription glasses and consume content. While this lets you use the Rokid Max for content, it doesn’t help you interact with everything else around you. If your degree of myopia exceeds the 0.00D to -6.00D adjustment range, you can buy Rokid Max Lens Inserts from the official website.
You can use the Rokid Max and Station combo to play Android games on the Play Store. All you need is a wireless controller that you can pair with the Station, and you’re good to go.
You can connect the Rokid Max AR Glasses with your gaming PC/laptop or Steam Deck for an immersive gaming session. The 120 Hz refresh rate can make your gameplay even smoother if you have the rig to run it.
We played games like Forza Horizon 5, FIFA, Call of Duty, and others on our Windows gaming laptop, and the overall experience was nothing short of amazing. Playing games on such a massive screen is highly immersive and one of a kind.
Since the glasses do not cover the eyes fully, you might still get distracted by things (especially pets) moving close to you. We’d suggest using the included cover to get a blacked-out background for a better visual experience.
The Rokid Station and the Rokid Max AR glasses are perfect partners, together offering a great multimedia experience. Be it watching games while your partner or kids are asleep, or catching your favorite web series while preparing coffee in the kitchen, the combo is almost perfect.
The glasses can also be used with your PC or laptop for gaming and work, proving to be an ideal companion. However, the AR features let us down and need improvement.
There are many use cases for these glasses, and if you have a compelling one, this combo could be a great investment – at least for your entertainment.
That said, the only loose end here is the AR experience; if that improves, the Rokid Max will be too compelling a device to ignore.
Samsung makes some incredibly expensive smartphones, but its cheaper A-range, including the mid-range Galaxy A54 5G and A34 5G, promises to combine the latest technology with more affordable prices.
Up against it is Google with the latest in its own popular 'a' range – the mid-market Pixel 7a – released at the same price as Samsung's highest-spec A-series phone: the A54 5G.
Which of these powerful yet well-priced handsets comes out on top, and are you better off saving with the cheaper A34? Read on to find out.
Find out all you need to know about buying your next smartphone in our guide to the best smartphones to buy in 2023
On paper, the Google Pixel 7a doesn't look very different from Google's flagship Pixel 7. That means you get some top-end features with the 7a, but you'll pay a bit more for them: it starts at £50 more than last year's model, the Google Pixel 6a, which is incidentally now available at a significant discount.
The Pixel 7a is equipped with the same Google Tensor G2 chipset as the Pixel 7, for practically the same performance. Its OLED screen is only slightly smaller at 6.1 inches and has the same 90Hz refresh rate – higher than the 60Hz standard on many smartphones.
The cameras are where the Google Pixel 7a really diverges from the Pixel 7, but the specs are far from bad. The dual-camera system has an upgraded ultra-wide camera to capture more in your shots.
Unlike most smartphones, it's only available with 128GB of storage, but you can pay for extra cloud storage if you need more. Like all phones in the Pixel 7 series, it gets five years of security updates from launch.
If you're thinking of opting for last year's Pixel 6a instead, you won't get the upgraded cameras and it runs on an older processor. But it has a high-resolution display and you can film in 4K.
Read what happened when we put all its features to the test in our Google Pixel 7a review, check the best contract deals, or see Sim-free prices for the 7a and 6a below.
We've rounded up the best mobile phone deals for this month so you don't have to.
The Samsung Galaxy A-range claims to do a lot, including giving you a bright screen that moves smoothly when browsing, (somewhat) versatile cameras, and long battery life. The range includes multiple phones, priced from around £130 to £450. The general rule: the bigger the number after the 'A', the newer and more advanced the phone.
Samsung sometimes releases cheaper versions of the same models without 5G connectivity, so check the specs before you buy.
As Samsung's top phone in the A series, it's the most expensive on offer but you'll also get the most features. This includes the more powerful Exynos chip not found on cheaper A phones.
On the front is a 6.4-inch OLED display with a 120Hz refresh rate – the same rate you find on flagship phones to stop screens juddering. It can render HDR10 content, which should make the picture even clearer. On the back are three high pixel-resolution lenses with optical image stabilisation, designed to counteract shaky hands when taking pictures and videos. It's a nice surprise to see a high pixel-resolution lens on the front, too.
It supports fast-charging up to 25 watts. It's guaranteed security updates for five years from its launch date (not the day you buy it), so you can choose to keep it for much longer than a two-year contract. With IP67 certification, it should be dust-resistant and survive 30 minutes in 1-metre-deep water.
Buy it sim-free below, compare contract deals or read our Samsung Galaxy A54 5G review to get the full scoop.
The Samsung Galaxy A34 5G is cheaper than the A54 5G, but you still get three rear camera lenses, 25W fast-charging, and it's one of the cheapest phones on the market to get five years of security support from launch.
It has a slightly bigger OLED screen than the A54 5G, at 6.6 inches, but with the same pixel resolution. It's another A-series phone with IP67 certification.
Some of the compromises include a lower-spec processor, a weaker front-facing camera, and you get a plastic backing instead of the A54's Gorilla Glass 5 back.
Find out if you can save money without compromising too much on your next Samsung phone in our Samsung Galaxy A34 5G review. Or compare contract deals or sim-free prices below.
Shopping in the mid-range market isn't as hit and miss as it used to be, but you'll still want to know your phone will perform and stand the test of time.
We independently test more than 65 smartphones every year to find the very best models on the market, from £100 to over £1,500. We only give our Best Buy recommendation to the very best models, regardless of their price. Great Value mobile phones might not have the most premium displays or cameras, but they give you a lot for your money.
Behind our scores are more than 40 individual tests. We find out whether the multiple high pixel-resolution lenses and camera modes on flagship phones are really worth the extra money compared with the performance of mid-range models. We do our best to scratch the screen to see whether the most premium materials resist everyday stresses, and find out whether an ultra-bright screen really makes a difference to everyday use. We also ask key questions like: what will the flagship's upgraded processor really give you over its mid-range cousin? And does premium power come at the cost of a rubbish battery life?
Find out more about how we test mobile phones or head over to our mobile phone reviews to find the perfect phone for you at the right price.