Proponents claim the stones can promote health and well-being. janiecbros/Getty Images
As New York City mayor Eric Adams attends ribbon cuttings, marches in parades and bulldozes dirt bikes, he wears an energy stone bracelet that his supporters gave him. In a recent interview, Adams discussed his belief that New York City has a “special energy” because it sits atop a store of rare gems and stones – the so-called “Manhattan schist,” which is over 450 million years old and contains over 100 minerals.
Adams isn’t the only one imbuing rocks with metaphysical significance. During the first year of the pandemic, the crystal industry boomed, with customers hoping the gems might relieve their anxiety.
Some people might be confused about the allure of these stones. But crystal enthusiasts aren’t deviants. Current ideas about crystals come from a larger tradition called “metaphysical religion” that has always been part of the American spiritual landscape.
For centuries, people have attributed special properties to crystals. Scientist Carl Sagan, in his book “The Demon-Haunted World,” traces their modern popularity to a series of books written in the 1980s by Katrina Raphaell, who founded The Crystal Academy of Advanced Healing Arts in 1986.
Crystals aren’t just eye-catching stones. Quartz is used in electronics because it possesses piezoelectric properties that cause it to release an electric charge when compressed. But, as skeptics are quick to point out, there is no evidence crystals can bring health, prosperity or any of the other properties that crystal enthusiasts may attribute to them.
Metaphysical religion includes modern New Age movements, a nebulous milieu of alternative spiritual beliefs and practices, such as synchronicity or psychic abilities. Older traditions like Mesmerism, the idea that human beings emit magnetic energy that can be used for healing, and Spiritualism, the belief that mediums can communicate with the dead, also fall under the metaphysical umbrella.
Religious studies scholar Catherine Albanese ascribes four characteristics to metaphysical traditions: a preoccupation with the mind and its powers; “correspondences,” or the idea of hidden connections between things; a tendency to think in terms of energy and movement; and a yearning for salvation understood as “solace, comfort, therapy, and healing.”
Metaphysical ideas about crystals exhibit each of these characteristics.
While crystals are physical objects, not thoughts, many crystal enthusiasts recommend “cleansing” and “charging” crystals through visualization and other meditative techniques. So the mind plays a key role in crystal spirituality, as it does in other forms of metaphysical religion.
Correspondence refers to the belief found in many occult traditions that ordinary things possess secret qualities or connections to other things. A classic example is astrology, which postulates a correspondence between one’s birthday and certain personality traits. Metaphysical claims about crystals also reflect a belief in correspondences. For example, Colleen McCann, a self-described shaman affiliated with the crystal purveyor Goop, described the positive qualities of different crystals: bloodstones promote good health, rose quartzes help with love, and pink mangano calcites are good for sleep.
Modern crystal enthusiasts often use words like “energy” and “vibrations” that present their ideas in a scientific register. When enthusiasts talk about the energy of crystals – like Eric Adams did – they really mean that a crystal exerts influence within a certain proximity. This is the principle behind crystal water bottles that can be used to “charge” water with “vibrational energy.”
Stripped of scientific language, the logic of energy and vibrations is another form of what anthropologist James Frazer called “contagious magic” found in many cultures, where simply placing one thing next to another is believed to cause an effect.
Finally, metaphysical religion tends to focus on solving problems in this life rather than the hereafter. This includes health and prosperity, but also emotional growth and well-being. Crystal spirituality is certainly centered around these worldly goals.
This is a big distinction from traditions like Christianity that emphasize salvation in heaven. It is also a factor in why metaphysical ideas are stigmatized despite their popularity.
Protestant Christianity, with its emphasis on “sola fide” – faith alone – has historically dismissed many forms of material religion, or objects with religious significance, as superstition. So in a culture shaped by its historically Protestant majority, some Americans may be predisposed to look at crystal spirituality as foolish, greedy or even blasphemous.
But while claims about the hidden properties of crystals lack scientific validation, so do many of the claims of Christianity and other mainstream religions.
From a historical perspective, Adams’ ideas about crystals don’t make him an outlier. As a scholar of religious studies, I see him as a normal part of the American religious landscape.
Joseph P. Laycock does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Whether an academic researcher or a casual hobbyist, the microscope is an essential part of nearly every scientist’s toolkit. While all microscopes let you examine small samples with incredible detail, digital models make the process even more convenient by offering high-tech features and USB connectivity.
Although magnification is their primary function, there is more to these devices than simply making small things look bigger. The Plugable USB 2.0 Digital Microscope is the top pick, as it has a flexible arm and adjustable brightness settings.
When shopping for a digital microscope, perhaps the most crucial detail to consider is the device’s magnification power. Most microscopes have at least 40x magnification, but you can find certain high-end models with 1,000x magnification or higher. At 1,000x, samples appear 1,000 times larger than they do to the naked eye.
Digital microscopes with that much power can discern fine details on the cellular level, but this won’t matter much if your device doesn’t have sufficient resolution.
The resolution determines how well-defined your sample will appear on screen. Resolution is typically measured in megapixels and should be matched to the size of your computer’s monitor. Devices on the low end usually have a resolution of around 0.4 to 1 megapixel, which will look fine on a 640 by 480 screen but may look blurry on anything larger. Other digital microscopes boast a resolution of 2 or more megapixels, producing an image that stays sharp and detailed at larger sizes.
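The megapixel-to-screen relationship is simple arithmetic: a display cannot reveal more detail than the sensor captured. A quick sketch, using the example figures above rather than any particular product's specs:

```python
# Rough check of how sensor megapixels map onto display sizes.
# The resolutions below are common display examples, not the specs
# of any specific microscope.

def megapixels(width: int, height: int) -> float:
    """Pixel count of a width x height image, in megapixels."""
    return width * height / 1e6

# A 640x480 (VGA) screen holds ~0.31 MP, so a 0.4 MP sensor fills it;
# a 1920x1080 display holds ~2.07 MP, so a low-end sensor's image
# must be upscaled there and will look soft.
vga = megapixels(640, 480)
full_hd = megapixels(1920, 1080)

print(f"VGA: {vga:.2f} MP, Full HD: {full_hd:.2f} MP")
```

This is why a 2-megapixel microscope is the safer choice if you plan to view samples on a modern full-screen monitor.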
Nearly every digital microscope will have some kind of light source built into its design. Most devices use light-emitting diodes to illuminate your sample and make it easier to inspect. Look for a device that offers adjustable brightness levels so you can have complete control over the lighting.
Look at the product details closely to make sure that the microscope you choose will be compatible with your laptop or device. Keep in mind that most digital microscopes connect via USB cable. If your computer lacks a compatible USB port, opt for a microscope with wireless or Bluetooth capabilities. Similarly, if your microscope uses specific software, make sure that you have all of the proper system requirements to run it.
Some high-end digital microscopes have Wi-Fi or other wireless capabilities. This makes the device much more portable and gives you a better reach without the clutter of wires.
Make sure that your microscope comes with a free software download. These programs will allow you to display the image on your computer screen and may come with essential tools like image capture, video recording and even effects like sharpening or color correction.
If you buy a digital microscope for portability, look for a model with a built-in display screen. These screens are much smaller than computer monitors, but they let you take your digital microscope anywhere without the need for additional devices.
Depending on the objects that you intend to examine, a microscope with an adjustable stand can be extremely helpful. In addition to vertical height adjustments, many of these devices have a flexible arm that can be bent to any position imaginable. Other digital microscopes are completely handheld, eliminating the need for an arm altogether.
The cost of a digital microscope can vary widely depending on its power and any included features. A simple device can usually be purchased for around $40-$100, while a microscope designed for professional labs can cost $1,000 or more.
Q. How much magnification do I need?
A. While you can find microscopes that offer 1,000x magnification or more, a device with 400x magnification is more than enough for most purposes.
Q. Are digital microscopes suitable for children?
A. Digital microscopes can provide a great educational opportunity for curious kids, but make sure that you choose a durable, handheld device that’s not overly complicated.
Plugable USB 2.0 Digital Microscope
What you need to know: This popular digital microscope is extremely versatile and easy to install.
What you’ll love: Simply plug this microscope into your USB port, and you’ll be ready to examine your samples. This device provides up to 250x magnification with 2 megapixels for crystal clear images. The brightness is adjustable, and you can move the arm in any direction.
What you should consider: This microscope can be challenging to focus, depending on the object you’re inspecting.
Where to buy: Sold by Amazon
Jiusion Endoscope and Digital Microscope
What you need to know: This compact digital microscope is budget-friendly with solid metal construction.
What you’ll love: This affordable device can connect to your personal computer, Mac or smartphone and is powerful enough for 1,000x magnification. The microscope includes eight LEDs and a knob to adjust brightness and image focus.
What you should consider: While this microscope is compatible with Android phones, iPhone users will need to look elsewhere.
Where to buy: Sold by Amazon
Carson zOrb USB Digital Microscope
What you need to know: Perfect for students and aspiring academics, this simple microscope features a unique look and durable construction.
What you’ll love: This digital microscope has an ergonomic, handheld design, making it extremely portable. It can magnify up to 65x and can capture images and record video. It’s also compatible with both PC and Mac operating systems.
What you should consider: The included USB cable isn’t very long, and the magnification power is only suitable for casual users.
Where to buy: Sold by Amazon
Patrick Farmer writes for BestReviews. BestReviews has helped millions of consumers simplify their purchasing decisions, saving them time and money.
Copyright 2022 BestReviews, a Nexstar company. All rights reserved.
It may seem like technology advances year after year, as if by magic. But behind every incremental improvement and breakthrough revolution is a team of scientists and engineers hard at work.
UC Santa Barbara Professor Ben Mazin is developing precision optical sensors for telescopes and observatories. In a paper published in Physical Review Letters, he and his team improved the spectral resolution of their superconducting sensor, a major step toward their ultimate goal: analyzing the composition of exoplanets.
“We were able to roughly double the spectral resolving power of our detectors,” said first author Nicholas Zobrist, a doctoral student in the Mazin Lab.
“This is the largest energy resolution increase we’ve ever seen,” added Mazin. “It opens up a whole new pathway to science goals that we couldn’t achieve before.”
The Mazin lab works with a type of sensor called an MKID. Most light detectors — like the CMOS sensor in a phone camera — are semiconductors based on silicon. These operate via the photoelectric effect: a photon strikes the sensor, knocking off an electron that can then be detected as a signal suitable for processing by a microprocessor.
An MKID uses a superconductor, in which electricity can flow with no resistance. In addition to zero resistance, these materials have other useful properties.
For instance, semiconductors have a gap energy that needs to be overcome to knock the electron out. The related gap energy in a superconductor is about 10,000 times less, so it can detect even faint signals.
What’s more, a single photon can knock many electrons off of a superconductor, as opposed to only one in a semiconductor. By measuring the number of mobile electrons, an MKID can actually determine the energy (or wavelength) of the incoming light. “And the energy of the photon, or its spectra, tells us a lot about the physics of what emitted that photon,” Mazin said.
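That count-to-energy relationship can be sketched numerically. This is a back-of-the-envelope illustration, not the Mazin lab's actual device model: the silicon-like gap and the conversion efficiency are assumed placeholder figures, with the superconducting gap taken from the article's "about 10,000 times less" comparison.

```python
# Rough sketch: how many charge carriers one photon frees in a
# semiconductor vs. a superconductor. The 1.1 eV gap (roughly silicon's)
# and the 0.6 conversion efficiency are illustrative assumptions.

HC_EV_NM = 1239.84  # Planck's constant times the speed of light, in eV*nm

def carriers_freed(wavelength_nm: float, gap_ev: float, efficiency: float = 0.6) -> int:
    """Approximate number of carriers a photon of the given wavelength frees."""
    photon_energy_ev = HC_EV_NM / wavelength_nm  # E = hc / wavelength
    return int(efficiency * photon_energy_ev / gap_ev)

semiconductor_gap = 1.1                           # eV, silicon-like
superconductor_gap = semiconductor_gap / 10_000   # "about 10,000 times less"

# A 1,000 nm photon barely clears the silicon-like gap, but breaks
# thousands of Cooper pairs in the superconductor.
print(carriers_freed(1000, superconductor_gap))
```

Counting thousands of carriers per photon is what lets an MKID read off each photon's energy, where a conventional silicon pixel only registers that a photon arrived.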
The researchers had hit a limit as to how sensitive they could make these MKIDs. After much scrutiny, they discovered that energy was leaking from the superconductor into the sapphire crystal wafer that the device is made on. As a result, the signal appeared weaker than it truly was.
In typical electronics, current is carried by mobile electrons. But these have a tendency to interact with their surroundings, scattering and losing energy in what’s known as resistance. In a superconductor, two electrons will pair up — one spin up and one spin down — and this Cooper pair, as it’s called, is able to move about without resistance.
“It’s like a couple at a club,” Mazin explained. “You’ve got two people who pair up, and then they can move together through the crowd without any resistance. Whereas a single person stops to talk to everybody along the way, slowing them down.”
In a superconductor, all the electrons are paired up. “They’re all dancing together, moving around without interacting with other couples very much because they’re all gazing deeply into each other’s eyes.
“A photon hitting the sensor is like someone coming in and spilling a drink on one of the partners,” he said. “This breaks the couple up, causing one partner to stumble into other couples and create a disturbance.” This is the cascade of mobile electrons that the MKID measures.
But sometimes this happens at the edge of the dancefloor. The offended party stumbles out of the club without knocking into anyone else. Great for the rest of the dancers, but not for the scientists. If this happens in the MKID, then the light signal will seem weaker than it actually was.
Mazin, Zobrist and their co-authors discovered that a thin layer of the metal indium — placed between the superconducting sensor and the substrate — drastically reduced the energy leaking out of the sensor. The indium essentially acted like a fence around the dancefloor, keeping the jostled dancers in the room and interacting with the rest of the crowd.
They chose indium because it is also a superconductor at the temperatures at which the MKID will operate, and adjacent superconductors tend to cooperate if they are thin. The metal did present a challenge to the team, though. Indium is softer than lead, so it has a tendency to clump up. That’s not great for making the thin, uniform layer the researchers needed.
But their time and effort paid off. The technique cut down the wavelength measurement uncertainty from 10% to 5%, the study reports. For example, photons with a wavelength of 1,000 nanometers can now be measured to a precision of 50 nm with this system.
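As a sanity check on those figures: halving the fractional uncertainty (10% to 5%) is exactly what "doubling the spectral resolving power" means, since resolving power is the wavelength divided by the uncertainty.

```python
# Quick arithmetic check on the reported numbers: a 10% -> 5%
# fractional uncertainty at 1,000 nm doubles the resolving power
# R = wavelength / uncertainty.

wavelength_nm = 1000.0

before = 0.10 * wavelength_nm  # 100 nm uncertainty at 10%
after = 0.05 * wavelength_nm   # 50 nm uncertainty at 5%, as the study reports

r_before = wavelength_nm / before  # R = 10
r_after = wavelength_nm / after    # R = 20

print(f"uncertainty: {before:.0f} nm -> {after:.0f} nm, R: {r_before:.0f} -> {r_after:.0f}")
```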
“This has real implications for the science we can do, because we can better resolve the spectra of the objects that we’re looking at,” Mazin said.
Different phenomena emit photons with specific spectra (or wavelengths), and different molecules absorb photons of different wavelengths. Using this light, scientists can use spectroscopy to identify the composition of objects both nearby and across the entire visible universe.
Mazin is particularly interested in applying these detectors to exoplanet science. Right now, scientists can only do spectroscopy for a tiny subset of exoplanets.
The planet needs to pass between its star and Earth, and it must have a thick atmosphere so that enough light passes through it for researchers to work with. Still, the signal to noise ratio is abysmal, especially for rocky planets, Mazin said.
With better MKIDs, scientists can use light reflected off the surface of a planet, rather than transmitted through its narrow atmosphere alone. This will soon be possible with the capabilities of the next generation of 30-meter telescopes.
The Mazin group is also experimenting with a completely different approach to the energy-loss issue. Although the results from this paper are impressive, Mazin said he believes the indium technique could be obsolete if his team is successful with this new endeavor. Either way, he added, the scientists are rapidly closing in on their goals.
When it was first introduced by Meta's Mark Zuckerberg last fall, there was skepticism in some corners about the metaverse, the systems of avatars and virtual worlds that Zuckerberg is building and which he says will be the next version of the internet.
Richard Kerris, who runs a team of a hundred people at chip giant Nvidia who work on building technology for the metaverse, known as Omniverse (more here), is not at all skeptical about that future world.
He is skeptical about one thing, though.
"The only thing I'm skeptical about is how people tend to talk about it," Kerris told ZDNet, on a recent trip through New York City to meet with developers.
"People are misinterpreting metaverse as a destination, a virtual world, a this or that," Kerris observed. "The Metaverse is not a place, it's the network for the next version of the Web.
"Just replace the word metaverse with the word network, it'll start to sink in."
The network, in the sense that Kerris uses it, is a kind of sinewy technology that will bind together rich media on many websites, especially 3D content.
"In much the same way the Web unified so many things […] the next generation of that Web, the core underlying principles of that will be 3D, and with that comes the challenge of making that ubiquitous between virtual worlds.
"The end result would be, in much the same way you can go from any device to any website without having to load something in (remember the old days: What browser do you have? What extension?) — all that went away with HTML being ratified. When we can do that with 3D, it's going to be transformative."
Unsurprisingly for an executive of Nvidia, which sells the vast majority of the graphics chips (GPUs) used to render 3D, Kerris made the point that, "We live in a 3D world; we think in 3D," while the Web is a 2D reality. "It's limited," he said, with islands of 3D rendering capabilities that never interconnect.
"The consistency of the connected worlds is what is the magic that's taking place," he said. "I can teleport from one world to another, and I don't have to describe it each time that I build it."
The analog to HTML for this new 3D ecosystem is something called USD, universal scene description. As ZDNet's Stephanie Condon has written, USD is an interchange framework invented by Pixar in 2012, which was released as open-source software in 2016, providing a common language for defining, packaging, assembling, and editing 3D data.
(Kerris, an Apple veteran, has something of a spiritual if not actual tie to Pixar, having worked at LucasFilm for several years in the early noughts. See more in his LinkedIn profile.)
USD is capable of describing numerous things in a 3D environment, from lighting to the physics behavior of falling objects.
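To give a flavor of what that looks like, here is a minimal, hypothetical scene in USD's human-readable .usda text format. The prim names are invented for illustration, but nested `def` blocks with typed attributes like these are the core of how USD packages 3D data:

```usda
#usda 1.0
(
    doc = "Minimal illustrative scene; names are hypothetical"
)

def Xform "ConferenceRoom"
{
    def Sphere "Lamp"
    {
        double radius = 0.25
        color3f[] primvars:displayColor = [(1.0, 0.95, 0.8)]
    }

    def DistantLight "Sun"
    {
        float inputs:intensity = 500
    }
}
```

Because any USD-aware application can parse a file like this, the same scene can move between tools (or virtual worlds) without conversion, which is the interoperability Kerris compares to HTML.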
In practice, Kerris imagines the Omniverse-enabled, USD-defined metaverse as a road trip where people hop from one 3D world to the next as effortlessly as browsing traditional sites. "I can go from a virtual factory to a virtual resort to a virtual conference room to a virtual design center, to whatever," said Kerris.
Within those environments, 3D rendering will allow people to move past the cumbersome sneakernet of file sharing. "And it allows a lot more capability in what I do," he said, offering the example of product designers.
"With metaverse, and ubiquitous plumbing for 3D, we'll be in that 3D environment at the same time, and rather than sharing a Web page, we can move around. You can look at something on this side of the product. I can be looking at something else, but it's like we're in the same room at the same time."
Nvidia started down the path on USD six or seven years ago, said Kerris, "because we simulate everything we build [at Nvidia] before we build it in the physical world." Nvidia has industry peers working on the technology, including Ericsson, which wants to simulate antennas. "They all want a reality simulation," he said of companies in the USD fold.
Using the technology, said Kerris, one can go much deeper into the realm of digital twins, simulations of products and structures that allow for intervention, experimentation, and observation.
"Until the advent of consistent plumbing, it was done in a representative mode," he said, such as an illustration of a building in Autodesk. "It wasn't true to reality. I couldn't show you exactly how it would be in a windstorm," which isn't good because, as he put it, "I want to be damn straight about stuff I'm building in the physical world."
The "core base of a situation that's true to reality," using USD, will allow designers to more accurately simulate, backward and forward, including things such as tensile strength.
"I'd love to have a house that's structurally sound before I design the marble finish," he observed. "If I'm building a digital twin of a house I'm building, it's layers of stuff on there, things for structural engineers, and polish that others are going to come in and finish." The important thing is knowing it's "true to reality" for materials and things holding the structure together, he said.
By making possible those richer interactions in 3D, Kerris said, "In the same way that the Web transformed businesses, and experiences, and communication, so will the metaverse do that, and in a more familiar environment, because we all work in 3D."
Different companies are contributing to USD in different ways. For example, Nvidia has worked with Apple to define what's called rigid body dynamics.
"And there's more to come," he said.
Nvidia has been developing the Omniverse tools as a "platform," what Kerris calls "the operating system for the metaverse."
"People can plug into it. They can build on top of it. They can connect to it. They can customize it — it's really at their disposal, much the same way an operating system is today."
The USD standard has come "quite far" in terms of adoption, Kerris said, with most 3D companies using it. "Every company in entertainment has a USD strategy today," he observed. "The CAD [computer-aided design] and mechanical engineering, it's coming. They either have plans or they are participating in helping to define what's necessary."
"HTML was the same way in early days," he said. It lacked support for video in early days, with third-party plugins such as Adobe Flash dominating before standards evolved.
Will digital twins ignite the world's imagination about the metaverse? It seems somewhat too industrial-focused, ZDNet observed.
Kerris predicted that ordinary people will gain interest as they realize the metaverse is connectedness, not a single destination. "As they realize it's the next generation of the Web, I can visit a remote location without the need of a headset, or [without] installing specific browsers. That's one aspect," Kerris said. "In their everyday life, as we share photos today, you'll be able to share objects. You know, your kid comes home, and they made something and they'll be able to share it with the grandparents."
"It'll just become part of what you do, whether you're buying a piece of furniture for your house and you'll go into your phone. You'll sync with the home. You'll drop the furniture in. You'll walk around it — that's the thing people will take for granted, but it's the seamless connection."
The same for designing one's custom car finish, he offered. "You'll actually be connected to the factory making that car" to check out all the aspects of it.
"It's going to change everything," he said.
There will be multiplier effects, said Kerris, as digital twins allow for trialing multiple scenarios, such as with training robots.
"Today, they would plug a computer into that robot, and input it with information" to train the robot in one physical space, he said. In a digital twin environment, with a robot in the simulated room, "You can train not only one robot but hundreds," using "hundreds of scenarios the robot could encounter."
"Now, that robot is going to be thousands of times smarter than it would have been if you'd only trained it one time." Nvidia has, in fact, been pursuing that particular approach for many years by doing autonomous driving training of machine learning in simulated road environments.
Although autonomous driving hasn't reached its promised development, Kerris believes the approach is still sound. "I can build a digital twin of Palo Alto," the Silicon Valley town. "And I can have thousands of cars in that simulation, driving around, and I can use AI to apply every kind of simulation I can think of — a windstorm, a kid running out, chasing a ball, an oil slick, a dog — so that these cars in simulation are learning many thousands of times more scenarios than a physical car would."
Nvidia has been doing work, combining the simulated trials with real-world driving with car maker Mercedes for Level 5 autonomous driving, the most demanding level.
"The efficiency is pretty amazing," he said, meaning, how well the autonomous software handles the road scenarios. "By using synthetic data to train these cars, you have a higher degree of efficiency" when combining scenarios.
"I would much rather trust myself riding in a car trained in a simulated environment than [in] one trained in a physical environment." There still will be a role for the real-world data that comes from cars on the road.
As for the time frame for the vision, Kerris noted that "we are seeing it already in warehouses," which are rapidly adopting the robot-training regime. That includes Amazon, where a developer downloaded Omniverse and evangelized it internally. The enterprise version of Omniverse, which is a subscription-based product, was taken up by Amazon for more extensive robot training.
Amazon currently is in production with the software for its pick-and-place robots.
"The beauty is they discovered by using synthetic data generation they were able to be more efficient with stuff rather than just rely on the camera" on the robot for object detection. Those cameras often would get tripped up by reflective packing tape on packages, Kerris said. Using synthetic Omniverse-generated data got around that limitation. That's one example of being more efficient in robotics, he said.
Consumers will probably feel the effects of such simulations in the results.
"There are a hundred thousand warehouses on the planet," Kerris said. "They are all looking at using robotics to be safer, more efficient, and to better utilize the space." People "may not be aware that's taking place, but they'll reap the benefits of it."
In some situations, consumers will "know, because they're getting things a lot faster than in past," he said. "Behind the curtain, things will be much more efficient than they were six months ago." The same goes for retailers such as Kroger, which is using Omniverse tools to generate synthetic data to plan how to get produce to consumers faster.
As for self-driving cars, "The presumption that all these cars will be autonomous today, it's a bit — it's not there yet," he conceded. "But will we have autonomous taxis, and things that will take us from here to there? Oh, yeah, that's easy."
But, "For a car that drives up to you and it will drive you to New Jersey autonomously? We have a little ways to go."
As for direct consumer experiences, "People will start to see the ability to experience locations," Kerris said. Leisure industry executives are interested, for example, in how to showroom a hotel room to consumers in advance of a trip in a way better than photos. "I'm going to allow you to teleport into the room, experience it, so your decision will be based on an immersive experience. Look at the window, see what my view is going to be," Kerris said.
The impact on education "is going to be huge," Kerris said. Today, physical location means some inner-city schools might not experience lavish field trips. "An inner-city school is not exactly going to have a field trip to do a safari in Africa," he mused. "I think that virtual worlds [that] are seamlessly connected can bring new opportunities by allowing everybody to have the same experience no matter what school they're in."
An avatar of researcher such as Jane Goodall could "inspire learning," he suggested. "Think about what that does for a student."
While emphasizing 3D, Kerris is not pushing virtual reality or augmented reality, the two technologies people tend to focus on. Those things are part of the picture, but 3D doesn't have to be with a headset on, he asserted.
For one thing, today's VR tools, such as VR videos on YouTube that use conventional VR headsets, have been quite limited, Kerris said. "It's not seamless; it's not easy; it's not like a website," he observed.
In addition to stints at Apple, Amazon, and LucasFilm, Kerris briefly ran marketing for headset developer Avegant. Those headsets were not VR. They were made to be private, immersive movie screens attached to your face using Texas Instruments DLP projection chips. The quality of the product, Kerris reflected, "was phenomenal," but it was too expensive to make, costing $800 at retail. And the fact that a laser would project onto the retina "scared everyone," he said. (Avegant is still in business, developing a technology called liquid crystal-on-silicon.)
What needs to happen is for today's disparate virtual environments to receive that sinewy tissue of USD and related technology. "They're all disconnected," said Kerris of today's proto-metaverse, such as Oculus Rift. "If they were just simple websites, where you could bop around and go experience it, the opportunity would be much greater."
Rather than having to have an Oculus headset, "If I could experience it with this being a window into that world," he said, holding up his smartphone, "chances are a lot higher I would go check it out."
Will USD make that happen?
"Yes. That's absolutely the goal of USD: to unify 3D virtual worlds."
Still, showrooming hotel rooms doesn't sound like it will jumpstart things. When is the Tim Berners-Lee event that will make it all happen for consumers in a grassroots way?
"When did the Web become something that became ubiquitous with consumers?" he asked, rhetorically. "Well, it started with email, then I could send a picture, then, all of a sudden I could do video. It kind of evolved as it went along."
Kerris alluded to the early days of mobile websites on the iPhone, when Steve Jobs first unveiled the technology onstage at Macworld in January 2007, back when Kerris was with Apple, speaking later on a video chat via FaceTime.
"What was the transformative thing that allowed the Web to be in everybody's pocket? It's kind of like that," he said. "It almost happened when you didn't know it, and then people take it for granted."