Mark Zuckerberg sat across from me, controlling objects on a screen with small motions of his fingers. Taps, glides, pinches. On his wrist was a chunky band that looked like an experimental smartwatch: It's Meta's vision of our future interactions with AR, VR, computers and just about everything else.
"It'll work well for glasses…I think it'll actually work for everything. I think in the future, people will use this to control their phones and computers, and other stuff…you'll just have a little band around your wrist," Zuckerberg said, right before he demoed the neural wristband. His hand and finger movements seemed subtle, almost fidgety. Sometimes nearly invisible.
The wristbands are just one part of Meta's strategy beyond VR, and they were among the tech I got to see and try during a first-ever visit to Meta's Reality Labs headquarters in Redmond, Washington. The trip was the first time Meta's invited journalists to visit its future-tech research facility, located in a handful of nondescript office buildings far north of Facebook's Silicon Valley headquarters.
The last time I visited Redmond, I was trying Microsoft's mixed reality hardware. My trip to Meta was a similar experience. This time, I was demoing the Quest Pro, a headset that blends VR and AR together into one device and aims to kick off a more work-focused chapter of Zuckerberg's metaverse strategy.
Meta's newest news is focused on the Quest Pro, and also on new work partnerships with companies like Microsoft, Zoom, Autodesk and Accenture, targeting ways for Meta to maybe dovetail with Microsoft's mixed reality ambitions.
I also got to look at a handful of experimental research projects that aren't anywhere near ready for everyday use but show glimpses of exactly what Meta's shooting for next. These far-off projects, and a more-expensive Quest Pro headset, come at a strange time for Meta, a company that's already spent billions investing in the metaverse, and whose most popular VR headset still has fewer than 20 million devices sold. It feels like the future isn't fully here yet, but companies like Meta are ready for it to be.
I experienced a number of mind-bending demos with a handful of other invited journalists. It felt like I was exploring Willy Wonka's chocolate factory. But I also came away with the message that, while the Quest Pro looks like the beginning of a new direction for Meta's hardware, it's nowhere close to the end goal.
"Co-adaptive learning," Michael Abrash, Meta's Reality Labs' chief scientist, told me over and over again. He was describing the wristbands that Metamultiple times since in 2019. It's a hard concept to fully absorb, but Meta's demo, shown by a couple of trained researchers, gave me some idea of it. Wearing the bulky wristbands wired to computers, the wearers moved their fingers to make a cartoon character swipe back and forth in an endless-running game. Then, their movements seemed to stop. They became so subtle that their hands barely twitched, and still they played the game. The wristbands use EMG, or electromyography (the electrical measurement of muscles) to measure tiny muscle impulses.
A feedback-based training process gradually allowed the wearers to start shrinking down their actions, eventually using only a single motor neuron, according to Thomas Reardon, Reality Labs' Director of Neuromotor Interfaces and former CEO of CTRL-Labs, who talked us through the demos in Redmond. The end result looks a little like mind reading, but it's done by subtly measuring electrical impulses showing an intent to move.
When Zuckerberg demonstrated the wristband, he used a similar set of subtle motions, though they were more visible. The wristband's controls feel similar to a touch-based trackpad or air mouse, able to identify pressure-based pinches, swipes and gestures.
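As a rough illustration of the underlying idea (this is not Meta's actual pipeline; the window size and threshold here are made-up values for a toy signal), detecting an "intent to move" from an EMG stream can be sketched as rectifying the signal, smoothing it into an envelope, and flagging where the envelope crosses a threshold:

```python
import random

def emg_envelope(samples, window=50):
    """Rectify raw EMG samples, then smooth with a trailing moving average."""
    rectified = [abs(s) for s in samples]
    envelope = []
    for i in range(len(rectified)):
        start = max(0, i - window + 1)
        envelope.append(sum(rectified[start:i + 1]) / (i + 1 - start))
    return envelope

def detect_activation(samples, threshold=0.3):
    """True wherever the smoothed muscle activation crosses the threshold."""
    return [e > threshold for e in emg_envelope(samples)]

# Synthetic stream: a quiet baseline with one short burst of "muscle activity"
random.seed(0)
quiet = [random.gauss(0, 0.05) for _ in range(500)]
burst = [random.gauss(0, 0.8) for _ in range(100)]
stream = quiet + burst + quiet
flags = detect_activation(stream)
print(any(flags[500:600]), any(flags[:450]))  # True False: burst detected, baseline quiet
```

Real systems classify far richer patterns than a single threshold can, which is where the machine learning (and the co-adaptation Meta describes) comes in.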
"In the long run, we're going to want to have an interface that is as natural and intuitive as dealing with the physical world," Abrash said, describing where EMG and neural input tech is aiming.
Typing isn't on the table yet. According to Zuckerberg, it would require more bandwidth to get to that speed and fidelity: "Right now the bit rate is below what you would get for typing quickly, but the first thing is just getting it to work right." The goal, at some point, is to make the controls do more. Meta sees this tech as truly arriving in maybe five to six years, which feels like an eternity. But should that timeframe hold, it will likely line up with when Meta sees its finalized AR glasses becoming available.
Zuckerberg says the wristbands are key for glasses, since we won't want to carry controllers around, and voice and hand tracking aren't good enough. But eventually he plans to make these types of controls work for any device at all, VR or otherwise.
The controls look like they'll involve an entirely different type of input language, one that might have similarities to existing controls on phones or VR controllers, but which will adapt over time to a person's behavior. It seems like it would take a while to learn to use.
"Most people are going to know a whole lot about how to interact in the world, how to move their bodies," Reardon said to me. "They're going to understand simple systems like letters. So let's meet them there, and then do this thing, this pretty deep idea called co-adaptation, in which a person and a machine are learning together down this path towards what we would call a pure neural interface versus a neural motor interface, which blends neural decoding with motor decoding. Rather than saying there's a new language, I'd say the language evolves between machine and person, but it starts with what people do today."
"The co-adaptation thing is a really profound point," Zuckerberg added. "You don't co-adapt with your physical keyboard. There's a little bit of that in mobile keyboards, where you can misspell stuff and it predicts [your word], but this is a lot more."
I didn't get to wear or try the neural input wristband myself, but I got to watch others using them. Years ago at CES, I did get to briefly try a different type of wrist-worn neural input device for myself, and I got a sense of how technologies like this actually work. It's different from the head-worn device (from a startup since acquired by Snap) I tried a year ago, which measured eye movement using brain signals.
The people using the Meta wristbands seemed to make their movements easily, but these were basic swiping game controls. How would it work for more mission-critical use in everyday AR glasses? Meta's not there yet: According to Zuckerberg, the goal for now is to just get the tech to work, and show how adaptive learning could eventually shrink down response movements. It may be a while before we see this tech in action on any everyday device, but I wonder how Meta could apply the principles to machine learning-assisted types of controls that aren't neural input-based. Could we see refined controllers or hand tracking combinations arrive before this? Hard to tell. But these bands are a far-off bet at the moment, not an around-the-corner possibility.
A second set of demos I tried, demonstrating next-generation spatial audio, replicated research Meta talked about back in 2020 -- and which it originally planned on showing off in person before COVID-19 hit. Spatial audio is already widely used in VR headsets, game consoles and PCs, and on a variety of everyday earbuds such as AirPods. What Meta's trying to do is not just have audio that seems like it's coming from various directions, but to project that audio so it seems like it's literally coming from your physical room space.
A visit to the labs' soundproof anechoic chamber -- a suspended room with foam walls that block reflections of sound waves -- showed us an array of speakers designed to help study how sounds travel to individual ears, and to explore how sounds move in physical spaces. The two demos we tried after that showed how ghostly-real the sounds can feel.
One, where I sat down in a crowded room, involved me wearing microphones in my ears while the project leads moved around me, playing instruments and making noises at different distances. After 40 seconds of recording, the project leads played back the audio to me with over-ear headphones… and parts of it sounded exactly like someone was moving around the room near me. What made it convincing, I think, were the audio echoes: The sense that the movement was reverberating in the room space.
A second demo had me wearing a 3D spatial-trackable pair of headphones in a room with four speakers. I was asked to identify whether music I heard was coming from the speakers, or my ears. I failed. The music playback flawlessly seemed to project out, and I had to take off the headphones to confirm which was which as I walked around.
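The illusion in these demos builds on classic binaural hearing cues rather than anything Meta-specific. One of the simplest is the interaural time difference: sound from one side reaches the far ear slightly later, and the brain reads that delay as direction. A common textbook approximation is Woodworth's formula (the head radius below is a population average, not anything measured in these demos):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
HEAD_RADIUS = 0.0875     # m, an average adult head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth's approximation of the extra travel time to the far ear
    for a source at the given azimuth (0 deg = straight ahead, 90 = side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly to one side arrives roughly 0.66 ms later at the far ear.
print(round(interaural_time_difference(90) * 1000, 2))  # 0.66
```

Convincing room presence layers much more on top of this, including level differences, ear-shape filtering and the room reverberation that made Meta's demos feel real, but timing cues like this one are the foundation.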
According to Michael Abrash's comments back in 2020, this tech isn't as far from becoming a reality as the neural wristbands are. Meta's plan is to have phone cameras eventually help tune personal 3D audio, much like Apple just added to its newest AirPods, but with the added benefit of realistic room-mapping. Meta's goal is to have AR projections eventually sound convincingly present in any space. It's a goal that makes sense: A world of holographic objects will need to feel anchored in reality. Although, if future virtual objects sound as convincingly real as my demos did, it might become hard to distinguish real sounds from virtual ones, which brings up a whole bunch of other existential concerns.
I'm in a dark space, standing across from a seemingly candle-lit and very real face of someone who was in Meta's Pittsburgh Reality Labs Research offices, wearing a specially built face-tracking VR headset. I'm experiencing Codec Avatars 2.0, a vision of how realistic avatars in metaverses could get.
How real? Quite real. It was uncanny: I stood close and looked at the lip movement, his eyes, his smiles and frowns. It felt almost like talking with a super-real PlayStation 5 game character, then realizing over and over again this is a real-time conversation with a real person, in avatar form.
I wondered how good or limited face tracking could be: After all, my early Quest Pro demos using face tracking showed limits. I asked Jason, the person whose avatar I was next to, to make various expressions, which he did. He said I was a bit of a close-talker, which made me laugh. The intimate setting felt like I had to get close and talk, like we were in a cave or a dimly lit bar. I guess it's that real. Eventually, the realism started to feel good enough that I started assuming I was having a real conversation, albeit one with a bit of uncanny valley around the edges. It felt like I was in my own living video game cutscene.
Meta doesn't see this coming into play for everyday headsets any time soon. First of all, standalone VR headsets are limited in their processing power, and the more avatars you have in a room, the more graphics get taxed. Also, the tracking tech isn't available for everyone yet.
A more dialed-down version was in my second demo, which showed an avatar created from a phone-camera face scan using a new technology called Instant Codec Avatars. The face looked better than most scans I'd ever made myself. But I felt like I was talking with a frozen and only slightly moving head. The end result was less fluid than the cartoony Pixar-like avatars Meta uses right now.
One final demo showed a full-body avatar (legs, too!) that wasn't live or interactive. It was a premade 3D scan of an actor using a special room with an array of cameras. The demo focused on digital clothes that could realistically be draped over the avatar. The result looked good up close, but similar to a realistic video game. It seems like a test drive for how digital possessions could someday be sold in the metaverse, but this isn't something that would work on any headset currently available.
Like a volunteer in a magic show, I was asked to remove one of my shoes for a 3D scanning experiment. My shoe ended up on a table, where it was scanned with a phone camera -- no lidar needed. About half an hour later, I got to look at my own shoe in AR and VR. 3D scanning, like spatial audio, is already widespread, with lots of companies focused on importing 3D assets into VR and AR. Meta's research is aiming for better results on a variety of phone cameras, using a technology called neural radiance fields. Another demo showed a whole extra level of fidelity.
A couple of prescanned objects, which apparently took hours to prepare, captured the light patterns of complex 3D objects. The results -- which showed furry, spiky, fine-detailed objects including a teddy bear and a couple of cacti -- looked seriously impressive on a VR headset. The curly fur didn't seem to melt or matte together like most 3D scans; instead it was fluffy, seemingly without angles. The cactus spines spread out in fine spiky threads.
Of all the demos I tried at the Reality Labs, this was maybe the least wowing. But that's only because there are already, through various processes, lots of impressive 3D-scanned and rendered experiences in AR and VR. It's not clear how instant or easy it could be to achieve Meta's research examples in everyday use, making it hard to judge how effective the function is. For sure, if scanning objects into virtual, file-compatible versions of themselves gets easier, it'll be key for any company's metaverse ambitions. Tons of businesses are already aiming to sell virtual goods online, and the next step is letting anyone easily do it for their own stuff. Again, this is already possible on phones, but it doesn't look as good…yet.
The bigger question on my mind, as my day ended at Meta's facilities and I called a Lyft from the parking lot, was what it all added up to. Meta has a brand-new Quest Pro headset, which is the bleeding-edge device for mixing AR and VR together, and which offers new possibilities for avatar control with face tracking.
The rest of the future remains a series of question marks. Where Meta wants to spread out its metaverse ambitions is a series of roads that are still unpaved. Neural inputs, AR glasses, blends of virtual and real sounds, objects and experiences? These could still be years away.
In a year where Meta has seen its revenue drop while making sizable bets on the metaverse, despite inflation and an economic downturn, are these projects all going to be fulfilled? How long can Meta's long-game metaverse visions be sustained?
Abrash talks to us once more as we gather for a moment before the day's end, bringing back a connecting theme, that immersive computing will be a true revolution, eventually. Earlier on, we had stopped at a wall full of VR and AR headsets, a trophy case of all the experimental prototypes Meta has worked on. We saw mixed reality ones, ones with displays designed to show eyes on the outside, and ones so small they're meant to be the dream VR equivalent of sunglasses.
It made me think of the long road of phone design experimentation before smartphones became mainstream. Clearly, the metaverse future is still a work in progress. While big things may be happening now, the true "smartphones" of the AR and VR future might not be around for a long while to come.
"The thing I'm very sure of is, if we go out 20 years, this will be how we're interacting," Abrash said in front of the headset wall. "It's going to be something that does things in ways we could never do before. The real problem with it is, it's very, very hard to do this."
In this piece, we will take a look at the 11 best virtual reality stocks to buy. For more stocks, head on over to 5 Best Virtual Reality Stocks to Buy.
Advances in semiconductor fabrication and manufacturing have enabled chip makers to squeeze unthinkable amounts of computing power into pieces of silicon the size of a human thumbnail. This growth has also spurred industries of its own, and one such sector is the virtual reality segment of the broader technology industry.
Virtual reality, as the name suggests, refers to technologies that create an artificial representation of reality for users to immerse themselves into - whether for entertainment or productivity needs. This is achieved through headsets, processors, and software, with different companies providing different technologies for the processes.
The virtual reality industry was estimated to be worth $4.4 billion in 2020, and through a massive compound annual growth rate (CAGR) of 44.8%, the segment could be worth a whopping $84 billion by 2029, according to a research report from Fortune Business Insights. Driving this growth will be several factors, such as demand for virtual training platforms that let firms prepare their employees for complex tasks without investing in physical infrastructure. This allows companies in industries such as automobile manufacturing to reduce worker injuries and conduct factory personnel training safely.
Another research report, this time from Valuates Reports, analyzes both the virtual and augmented reality markets. Augmented reality is a close cousin of virtual reality that serves as a 'bolt-on' to existing reality instead of rendering a completely new environment. This research firm believes that the markets were worth $14 billion in 2020 and, through a strong CAGR of 41%, will grow to sit at $454 billion by the end of 2030.
Therefore, looking at these estimates, it's clear that virtual reality has a bright future ahead of it, despite the bloodbath in technology stocks this year. Today's piece will look at the key players in the industry and some well known firms in the list are Advanced Micro Devices, Inc. (NASDAQ:AMD), Meta Platforms, Inc. (NASDAQ:META), and Microsoft Corporation (NASDAQ:MSFT).
We took a look at the virtual reality industry and current trends to pick out firms currently offering creative products and services in the space. We preferred companies with strong financial performance, technological advantages, and relevance to current industry dynamics. These stocks are ranked via hedge fund sentiment gathered through Insider Monkey's survey of 895 funds for this year's second quarter.
Number of Hedge Fund Holders: 2
Tencent Holdings Limited (OTCMKTS:TCEHY) is a Chinese conglomerate that owns several companies including the video game developer Epic Games. The firm is headquartered in Shenzhen, the People's Republic of China.
Epic Games is one of the most well known game developers in the world, which rose to fame due to its Fortnite gaming brand. The company, like other game developers, is also targeting the metaverse industry which is seeing strong interest from large firms. Sony and The Lego Group invested a whopping $2 billion in Epic Games in 2022 to spur metaverse development.
Additionally, Epic Games' Unreal Engine, which is used by video game developers to develop their products, is capable of developing assets that support 3D visualization and augmented and virtual realities. Insider Monkey's Q2 2022 survey of 895 hedge funds revealed that two had invested in Tencent Holdings Limited (OTCMKTS:TCEHY).
Along with Meta Platforms, Inc. (NASDAQ:META), Advanced Micro Devices, Inc. (NASDAQ:AMD), and Microsoft Corporation (NASDAQ:MSFT), Tencent Holdings Limited (OTCMKTS:TCEHY) is a top virtual reality stock.
Number of Hedge Fund Holders: 4
MicroVision, Inc. (NASDAQ:MVIS) is an American company that develops sensors used in automobiles. Additionally, it develops a scanning technology that enables the creation of large images for a full field of view, as well as display concepts, designs, and modules that are used in augmented and virtual reality headsets. The firm is headquartered in Redmond, Washington.
MicroVision, Inc. (NASDAQ:MVIS)'s lidar systems scored a big win in September 2022, when chip giant NVIDIA Corporation announced that MicroVision's MAVIN DR dynamic view lidar would be supported by NVIDIA's Drive AGX platform. This will strengthen highway safety for vehicles.
By the end of its second fiscal quarter, MicroVision, Inc. (NASDAQ:MVIS) had $93 million in cash, which is important given the company's weak operating income profile. The firm has invested some of this into treasury securities, and its latest quarterly operating costs stood at $9.7 million - giving it plenty of runway room. Four out of the 895 hedge funds polled by Insider Monkey for their June quarter of 2022 portfolios had invested in the company.
MicroVision, Inc. (NASDAQ:MVIS)'s largest investor in our database is Daniel S. Och's Sculptor Capital which owns 572,200 shares that are worth $2.1 million.
Number of Hedge Fund Holders: 7
Matterport, Inc. (NASDAQ:MTTR) is an American company that caters to the front end of the virtual reality space. Its software applications allow developers to capture the depth and imagery of a physical space to create a virtual reality environment. The firm is headquartered in Sunnyvale, California.
Matterport, Inc. (NASDAQ:MTTR) reported a strong second fiscal quarter earlier this year, which, despite negative revenue growth, saw the firm expand its presence in the market. In the earnings release, the firm announced that its subscriber base grew by a massive 52% annually to stand at 616,000 during the quarter.
Matterport, Inc. (NASDAQ:MTTR) also counts some of the largest companies in the world as its customers, with firms such as Procter & Gamble, Sealy, and Netflix part of the 23% of Fortune 1000 firms that use the company's products. Additionally, the firm's latest quarter also saw it grow its services revenue by 74% and its subscription revenue by 20%.
Insider Monkey took a look at 895 hedge funds for their second quarter of 2022 holdings to discover that 7 had invested in Matterport, Inc. (NASDAQ:MTTR).
Matterport, Inc. (NASDAQ:MTTR)'s largest investor is Chase Coleman and Feroz Dewan's Tiger Global Management LLC which owns 3.6 million shares that are worth $13 million.
Number of Hedge Fund Holders: 23
Unity Software Inc. (NYSE:U) is a software platform provider whose products allow its customers to develop 2D and 3D content for a wide variety of gadgets and devices such as smartphones, tablets, computers, gaming consoles, and virtual and augmented reality platforms. The firm is headquartered in San Francisco, California.
Unity Software Inc. (NYSE:U) is also aggressively targeting growth, with its research and development expenses during its second fiscal quarter representing close to 73% of its revenue. This opens up a large opportunity for explosive growth in the future, should these investments bear fruit.
Needham set a $50 share price target for the company in October 2022, stating that its software platform is one of the best in the world and will benefit from the strong growth in the demand for 3D content. 23 out of the 895 hedge funds polled by Insider Monkey during the second quarter of this year had invested in Unity Software Inc. (NYSE:U).
Out of these, Jim Davidson, Dave Roux, and Glenn Hutchins's Silver Lake Partners is Unity Software Inc. (NYSE:U)'s largest investor. It owns 34 million shares that are worth $1.2 billion.
Number of Hedge Fund Holders: 26
Roblox Corporation (NYSE:RBLX) operates and develops an online entertainment platform whose studio allows developers to create and operate virtual 3D environments. The firm is headquartered in San Mateo, California, in the United States.
Roblox Corporation (NYSE:RBLX) posted record high revenues of $600 million in its second fiscal quarter, which enabled it to cross $1 billion in revenue for the first half of this year. The company's extreme focus on its products led it to spend months developing a voice chat feature before finally rolling it out to users. Additionally, it has a creative advertising strategy, which creates a unique environment that lets users interact with the ad and then make potential purchases.
Roblox Corporation (NYSE:RBLX)'s platforms are also attractive for advertisers since they provide a large user base of young users that are yet to cement their buying preferences. Needham reduced the company's share price target to $53 from $55 in September 2022, stating that its advertising platform is one of a kind. Insider Monkey's Q2 2022 895 hedge fund survey saw 26 having held a stake in the company.
Roblox Corporation (NYSE:RBLX)'s largest investor is Jim Simons' Renaissance Technologies which owns 11.5 million shares that are worth $380 million.
Number of Hedge Fund Holders: 26
Sony Group Corporation (NYSE:SONY) is a Japanese multinational conglomerate that designs and sells consumer electronics products and owns video game development platforms. The company is headquartered in Tokyo, Japan.
Sony Group Corporation (NYSE:SONY) operates on the hardware side of the virtual reality ecosystem, as it designs and sells the PlayStation VR headset. The headset has two modes: a 3D mode that lets users view content in HDR at 90Hz or 120Hz refresh rates, and a 2D mode that lets them play games in HDR at 24Hz, 60Hz, and 120Hz.
When compared to some other virtual reality companies that have weak financials, Sony Group Corporation (NYSE:SONY) is an established player that has sold millions of units of its gaming consoles and brings in close to $100 billion in revenue each year. By the end of this year's second quarter, 26 of the 895 hedge funds surveyed by Insider Monkey had bought the company's shares.
Out of these, Sony Group Corporation (NYSE:SONY)'s largest investor is Mario Gabelli's GAMCO Investors which owns 1.7 million shares that are worth $146 million.
Along with Advanced Micro Devices, Inc. (NASDAQ:AMD), Meta Platforms, Inc. (NASDAQ:META), and Microsoft Corporation (NASDAQ:MSFT), Sony Group Corporation (NYSE:SONY) is a VR stock worth a close look.
Click to continue reading and see 5 Best Virtual Reality Stocks to Buy.
Disclosure: None. 11 Best Virtual Reality Stocks to Buy is originally published on Insider Monkey.
For artists, technologists, engineers, advertisers and dreamers, augmented reality (AR) is the holy grail of digital experience. This tech promises to make magic real: to manifest whatever we can imagine in physical space.
But we're not there yet. Today, most of what is called AR is not worthy of the name. Rather than being an augmentation of reality, it is a poor facsimile of a powerful idea.
So much more is possible.
In the past few weeks, the world has woken up to the boundless potential of AR to transform how we live, learn, work and interact. Apple's CEO Tim Cook says we will end up wondering how we ever lived without it.
But as we take the first steps into this bright future, it is more critical than ever that we wake up to a fundamental truth: No matter how vivid our digital creations, AR will fall short of its full promise unless and until these can be accurately placed in the real world and, more importantly, fully shareable with others.
Imagination is hard-wired into the human psyche. From early childhood, we embellish our outer worlds with elements of our inner lives. But since there is no way for those around us to tap into those private imaginings, they remain wholly subjective and unverifiable.
Whether or not a sensory experience is shared by others has a critical impact on whether we ourselves believe it to be real. If you are the only one in a crowded room to hear a whispering voice, you will feel isolated and strange. You may start to question your own perception — perhaps even your sanity.
But if others around you say they've heard it too, you're back on solid ground. What you've experienced is valid and therefore must be real.
This is what is known as intersubjectivity, the process of sharing knowledge and experiences with others.
Today, the vast majority of AR tech does not support intersubjective experiences. Indeed, it is often little more than gimmicky filters on our solitary devices that are difficult to share.
If I conjure up a fire-breathing dragon in my back garden, there is no way for me to photograph myself with it or to impress you with the breadth of its wingspan. And if I can't share the magic, it becomes no more satisfying than watching a YouTube video that can't be shared or scrolling through Facebook alone.
And while AR does have the potential to work real magic — to port the products of our imaginations into the physical world — the examples we have access to today are often no more remarkable than the artificial backgrounds on Zoom.
If we want AR to enable a true augmentation of reality, we need to use tech that supports shared digital experiences in the physical world.
Given AR's potential to transform everything from how we train fighter pilots to how doctors collaborate on cases, it is crucial that we address the issue of shared AR now or important interactive experiences will not be possible.
The answer is surprisingly simple. It all boils down to precise positioning.
Many assume that objects and environments that exist in AR are automatically anchored in a fixed location and that it should be easy for multiple people to experience the same things in the same places. The truth is this is never the case.
There are apps that offer rough estimates of where AR objects are placed in physical spaces but these are nowhere near accurate enough. You and your friend may be viewing the same AR unicorn in all its sparkling detail. But while your device may show it standing solidly by the door, hers may show it floating near the ceiling.
When positioning fidelity is this low, intersubjectivity simply isn't possible. While you may be together, your experience cannot meaningfully be described as a shared reality.
This shortcoming becomes particularly jarring when you and a friend or colleague try to engage in a shared physical activity involving digital equipment. Virtual tennis is an impossibility when the ball is in one place for you and somewhere else for your opponent. The same goes for racing digital cars. The list goes on.
The reason it's been so difficult to position AR objects in physical space until now is that our mobile devices don't share a consistent, precise coordinate system.
It's true that smartphones come equipped with GPS, which does make it possible to establish shared geographical parameters to some degree. But for a host of reasons, GPS is far from exact enough for true intersubjectivity.
GPS may be able to establish that an AR object is in a given house, but not whether it is in the bedroom or bathroom. Never mind whether it is sitting on top of a table or under it.
The logical solution to this would be a more precise version of GPS. That, however, would mean a system that is completely unaffected by those factors that hinder GPS fidelity, which range from signal blockage by physical obstacles to poor weather or even solar storms. Smartphone GPS is usually accurate to within a 4.9-meter radius, but only under a clear sky and away from buildings, bridges and trees.
The near-term fix for AR's location problem is much simpler, and billions of dollars less expensive. Rather than spending years on creating a hyper-accurate coordinate system, we should move away from geographical anchors altogether.
Instead of two devices trying to pinpoint their respective locations on a map, they merely need to establish where they are relative to one another. In other words, rather than relying on a fixed set of coordinates, devices should be equipped with technology that can create shared, one-off coordinate systems on an as-needed basis.
Say you and I want to race our digital Ferraris along a beach. With this technology, all we'd need to do is synchronize our devices so they "agree" on their relative positions. Once they have an accurate sense of where they are in an ephemeral space, shared reality is possible.
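A toy version of that handshake, in 2D for clarity: if both devices can observe the same two anchor points (say, two markers on the beach), each expressed in its own local frame, they can solve for the rotation and translation relating the frames. This is just the underlying geometry, not Auki Labs' actual protocol, and real systems work in 3D with many more anchors and noise handling:

```python
import math

def relative_transform(p_a, q_a, p_b, q_b):
    """Given two shared anchors seen in device A's frame (p_a, q_a) and
    device B's frame (p_b, q_b), return the rotation angle and translation
    that map B's coordinates into A's frame."""
    ang_a = math.atan2(q_a[1] - p_a[1], q_a[0] - p_a[0])
    ang_b = math.atan2(q_b[1] - p_b[1], q_b[0] - p_b[0])
    theta = ang_a - ang_b
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # Translation: rotate B's first anchor, then shift it onto A's copy of it
    tx = p_a[0] - (cos_t * p_b[0] - sin_t * p_b[1])
    ty = p_a[1] - (sin_t * p_b[0] + cos_t * p_b[1])
    return theta, (tx, ty)

def to_frame_a(point_b, theta, t):
    """Express a point from B's frame in A's frame."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (cos_t * point_b[0] - sin_t * point_b[1] + t[0],
            sin_t * point_b[0] + cos_t * point_b[1] + t[1])

# Device B's frame is rotated 90 degrees and offset from device A's:
theta, t = relative_transform((1, 0), (2, 0), (-1, 0), (-1, -1))
print(to_frame_a((-1, -1), theta, t))  # B's second anchor lands at A's (2, 0)
```

Once both devices agree on this transform, a virtual Ferrari placed by one appears in the same physical spot for the other, with no global map required.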
The larger-scale, more complex AR environments I foresee in the future may well one day require a universal 3D positioning system that uses powerful consensus algorithms and persistent location anchors.
But for today's augmented reality to be more than a buzzword, we need to focus on precise positioning and the technologies we can use right now to precisely share location and invite others into our enhanced reality. With these tools, we can transform AR from a gimmick into a technology that enhances all of our lives.
(Johannes Davidsson is the Head Of Business Development at Auki Labs, an AR tech company creating a decentralized protocol for collaborative spatial computing.)
Save a bundle on great fitness equipment this Prime Day - including $350 off this Fitness Reality 4000MR Magnetic Rower, a massive 41% reduction on the full price. The 4000MR offers 10 preset workout programs and 5 customizable programs to keep you busy, and has a chain-driven dual transmission mechanism to provide the strength needed for intense workouts.
The large contoured cushion seat (13.5” L x 10” W) provides extra comfort for a long workout, and the raised seat height of up to 22.5” makes it easy to get on and off the rower. Ball-bearing seat rollers make for smooth rowing strokes, and the highly visible backlit 5” LCD displays distance, time, count, calories burned, RPM, watts, and tension levels to help you track your progress. This rower is suitable for users up to 6’5” and 300 lbs, making it a perfect addition to any home gym.
If this isn't the model for you, have a look at our guide to the best rowing machines for a round-up of the best brands on the market, including Hydrow and Ergatta.
The Prime Day sale is a great time to snap up a rowing machine, and this model from Fitness Reality can help take your cardio workouts to the next level. The rower is foldable and has wheels for easy relocation and storage. Folded, it measures 45” L x 25” W x 59” H; extended, it offers a 38” slide rail with a 41” inseam.
Dual independent rowing handles provide a full arm, shoulder and chest workout, and the rubber-coated handles have comfortable grip balls that help protect your hands while you work out. A smartphone holder and AC adapter are also included.
Rowing is one of the best impact-free full-body workouts available. Each stroke naturally engages multiple muscle zones, while boosting your heart rate and burning calories. If you're looking to increase your fitness or use a rowing machine to lose weight, now is the time to buy!
Haven't found what you were looking for? Keep an eye out — the Prime Early Access Sale runs from October 11-12. If you're not yet a Prime member, it's also worth signing up for a free trial. The Prime Early Access Sale rewards Prime members with a wealth of discounts across various brands and categories.
For more, check out our Amazon Prime Day Health Deals for 2022. For more great deals, read our roundup of the best rowing machines on sale.
The Nobel Prize in physics this year has been awarded "for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science."
To understand what this means, and why this work is important, we need to understand how these experiments settled a long-running debate among physicists. And a key player in that debate was an Irish physicist named John Bell.
In the 1960s, Bell figured out how to translate a philosophical question about the nature of reality into a physical question that could be answered by science—and along the way broke down the distinction between what we know about the world and how the world really is.
We know that quantum objects have properties we don't usually ascribe to the objects of our ordinary lives. Sometimes light is a wave, sometimes it's a particle. Our fridge never does this.
When attempting to explain this sort of unusual behavior, there are two broad types of explanation we can imagine. One possibility is that we perceive the quantum world clearly, just as it is, and it just so happens to be unusual. Another possibility is that the quantum world is just like the ordinary world we know and love, but our view of it is distorted, so we can't see quantum reality clearly, as it is.
In the early decades of the 20th century, physicists were divided about which explanation was right. Among those who thought the quantum world just is unusual were figures such as Werner Heisenberg and Niels Bohr. Among those who thought the quantum world must be just like the ordinary world, and our view of it is simply foggy, were Albert Einstein and Erwin Schrödinger.
At the heart of this division is an unusual prediction of quantum theory. According to the theory, the properties of certain quantum systems that interact remain dependent on each other—even when the systems have been moved a great distance apart.
In 1935, the same year he devised his famous thought experiment involving a cat trapped in a box, Schrödinger coined the term "entanglement" for this phenomenon. He argued it is absurd to believe the world works this way.
The problem with entanglement
If entangled quantum systems really remain connected even when they are separated by large distances, it would seem they are somehow communicating with each other instantaneously. But this sort of connection is not allowed, according to Einstein's theory of relativity. Einstein called this idea "spooky action at a distance."
Again in 1935, Einstein, along with two colleagues, devised a thought experiment that showed quantum mechanics can't be giving us the whole story on entanglement. They thought there must be something more to the world that we can't yet see.
But as time passed, the question of how to interpret quantum theory became an academic footnote. The question seemed too philosophical, and in the 1940s many of the brightest minds in quantum physics were busy using the theory for a very practical project: building the atomic bomb.
It wasn't until the 1960s, when Irish physicist John Bell turned his mind to the problem of entanglement, that the scientific community realized this seemingly philosophical question could have a tangible answer.
Using a simple entangled system, Bell extended Einstein's 1935 thought experiment. He showed there was no way the quantum description could be incomplete while prohibiting "spooky action at a distance" and still matching the predictions of quantum theory.
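Bell's result is usually stated in its CHSH form, which the article does not spell out; as a sketch, for detector settings a, a′ on one side and b, b′ on the other, with E(x, y) the measured correlation of outcomes:

```latex
% Any local hidden-variable account must satisfy the CHSH bound:
\[
S \;=\; \bigl|\, E(a,b) - E(a,b') + E(a',b) + E(a',b') \,\bigr| \;\le\; 2 .
\]
```

Quantum theory predicts values of S up to \(2\sqrt{2} \approx 2.83\) for entangled photons measured at suitably chosen angles, and it is precisely this violation of the bound that the Clauser and Aspect experiments observed.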
Not great news for Einstein, it seems. But this was not an instant win for his opponents.
This is because it was not evident in the 1960s whether the predictions of quantum theory were indeed correct. To really prove Bell's point, someone had to put this philosophical argument about reality, transformed into a real physical system, to an experimental test.
And this, of course, is where two of this year's Nobel laureates enter the story. First John Clauser, and then Alain Aspect, performed the experiments on Bell's proposed system that ultimately showed the predictions of quantum mechanics to be accurate. As a result, unless we accept "spooky action at a distance," there is no further account of entangled quantum systems that can describe the observed quantum world.
So, Einstein was wrong?
It is perhaps a surprise, but these advances in quantum theory appear to have shown Einstein to be wrong on this point. That is, it seems we do not have a foggy view of a quantum world that is just like our ordinary world.
But the idea that we perceive clearly an inherently unusual quantum world is likewise too simplistic. And this provides one of the key philosophical lessons of this episode in quantum physics.
It is no longer clear we can reasonably talk about the quantum world beyond our scientific description of it—that is, beyond the information we have about it.
As this year's third Nobel laureate, Anton Zeilinger, put it: "The distinction between reality and our knowledge of reality, between reality and information, cannot be made. There is no way to refer to reality without using the information we have about it."
This distinction, which we commonly assume to underpin our ordinary picture of the world, is now irretrievably blurry. And we have John Bell to thank.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: How philosophy turned into physics and reality turned into information (2022, October 7) retrieved 17 October 2022 from https://phys.org/news/2022-10-philosophy-physics-reality.html
"In The Loop" explores how Americans are dealing with the changing dating landscape in its new series "Love Life."
Reality TV has a long history.
A show titled "Candid Camera," which premiered on ABC back in 1948, holds the Guinness World Record for being the first TV reality show. It was essentially a prank show that involved playing practical jokes on unsuspecting members of the public and filming their reactions.
While there wasn't much dating going on on "Candid Camera," it was the origin of the voyeurism that feeds modern America's appetite for reality content.
It wasn't until 1965 that a reality love show — ABC's "The Dating Game" — first tried to take on cupid's role. Each episode featured an eligible single looking to go on a date with one of the other contestants. The catch was that the potential partners remained out of sight while the eligible bachelor or bachelorette made decisions based solely on their anonymous interactions.
If it sounds familiar, it's because that's pretty much the same concept as Netflix's "Love Is Blind," which first aired in 2020, but without the modern twist of talking pods and the 24/7 immersion where contestants live together in a home.
In the more than half a century since "The Dating Game" premiered, reality dating shows have become a behemoth in their own right, and their continued evolution with the times shows they're probably here to stay.
Andy Dehnart is the writer behind Reality Blurred, a blog where he chronicles and comments on the latest the reality TV genre has to offer.
"I think dating shows tap into a number of things that make us completely addicted to them, absorbed by them, fascinated by them," Dehnart said. "I think what has really sustained dating shows overall, especially 'The Bachelor' franchise which is the big one of course, is that it's turned into this ongoing soap opera."
By bringing back popular characters viewers have been attached to, Dehnart says the dating reality TV genre is able to keep viewers hooked onto the story.
As much as they appeal to us, do they also reflect how we view love and relationships? In other words, how realistic is romance in reality TV? Dehnart said that while the shows are not literally scripted, they're not quite real either.
"There's all kinds of components to reality TV shows, and every single one of those components introduces a possibility of artificiality and of lack of reality," Dehnart said. "That starts with the concept for the show and 'how realistic is this format?' Then when you cast people, are you casting people who are representative of average viewers or people in the region which are producing this show? Or, are you looking for super attractive people, people who are extremely outgoing and type-A? When you're on set and filming everything from the way someone acts in front of the camera to the way the cameras film them can affect how we see people."
Dehnart also said pre-production plans and post-production editing can play a big role in what we see as representations of love and relationships on reality TV.
When you think about the big money behind the scenes, that pre-production and editing effort makes sense. Reality TV is a money-making industry, and some of the most viewed dating shows pull in millions of viewers. Take for example the latest seasons of ABC's "The Bachelor" and "The Bachelorette," which pulled in an average of 3.6 and 2.8 million viewers, respectively. Viewership for streaming services is calculated differently, but Netflix said its "Love Is Blind" was viewed by 30 million member households.
But while raking in viewers and cash for producers, it's also a genre that's influenced society's idea of love in general. The shows often feature the common romance tropes of "soul mates," "prince charming" and "love at first sight." Decisions are dramatized to an extent that reflects fairytale aspirations and concepts.
While all those ideas might not be inherently bad, the problem is that reality dating shows also promote problematic behavior and concepts.
Social research suggests a link between viewing reality TV and gender stereotyping. Watching reality TV, as it turns out, has been associated with a strong adherence to a traditional masculine ideology and a greater endorsement of gendered and heterosexual roles for women.
Studies have also found evidence that sexually oriented reality TV, as many reality dating shows can be, is one factor in young people's willingness to engage in casual sex, along with the internet, social media, and their peers. That doesn't have to be a bad thing when it's ethical, safe and consensual, but these studies also note that sexual activity is seen differently for women and men. It's often viewed as empowering for the men but, on the other hand, lowers sexual agency for women.
"I think in dating shows, you get a lot of sexualization, especially of women, because again, that's something that happens in our culture and that gets mirrored and then perhaps amplified by a reality TV show," Dehnart said. "I think most importantly in casting, we don't really see most shows reflective of what America looks like for American reality TV shows. Hollywood can sell young and sexy, and therefore, you're going to see that more in a dating show."
That treatment of women and depiction of gender norms can have a real impact on young viewers especially. There hasn't been too much research into the effects of dating shows in particular, but one small Australian survey involving single people who watch reality dating shows found 54% of respondents "learn about relationships from reality TV shows about love," 48% said they "feel reassured that there are other people who are also looking for love," and just over one-third, 37%, agreed with the claim that "you can see yourself and your relationship in reality TV shows about love."
Another poll conducted by YouGov found that while 56% of Americans believe reality TV has a negative influence on society, and 58% think reality dating shows in particular are heavily scripted, 65% of Americans also indicated that they're interested in appearing on at least one of reality TV's many genres.
This season’s books have outdone themselves. From a Stephen King thriller (that may be his best ever) to a new Taylor Jenkins Reid novel and another Elin Hilderbrand book, we can barely read fast enough to dig through our growing pile.
While summer beach reads are a total vibe, doesn’t the back-to-school rush send everyone running to the bookstore as if we are all cracking open our trapper keepers and scanning our Scholastic Book Fair flyers stat? We may not be in school anymore, but the high of new beginnings definitely brings out the book lover in all of us.
Thankfully, all our favorite authors are here for us, and they’ve provided some really fantastic reads for those times when you simply want to escape reality. Our suggestions span every genre for all sorts of readers. The one thing they have in common? They will whisk you away to another dimension.
Yes, this is a story about video games. But you don’t need to know anything about video games (I’ve never played a video game in my life, aside from Nintendo’s Mario Bros.) to fall deeply in love with Zevin’s characters. Under the video game theme, this is really about a friendship spanning time, heartbreak, sickness and everything in between.
Our favorite author (“Malibu Rising,” “The Seven Husbands of Evelyn Hugo,” and “Daisy Jones & The Six”) did it again. This time, the subject is tennis and our hero is Carrie. She’s a little tough to love, but aren’t we all?
Follow her journey from the time she’s a child until now, as she fights to redeem her title. The book is a little slow at the beginning, but it’ll grow on you until you find yourself reading at 3 a.m., unable to stop until you learn whether or not Carrie is finally redeemed.
If you’ve loved any of Hilderbrand’s books, then you’ll fall for this one too. If you haven’t read any of Hilderbrand’s books, then you’ve got some really fun work to do.
“The Hotel Nantucket” is very similar to all the rest of her Nantucket books, but we love it all the same. It follows the rebirth of a hotel, along with all of the hotel’s employees and their dramas and romances.
This is a must-read for any human. It’s about a family and how they embrace and discover one of their children who was born a boy but wants to be a girl.
This is one of the most beautifully written, achingly emotional and (at times) surprisingly funny reads.
It’s the book that’s got people talking at the moment, and if you like thrillers, you’ll love it. In fact, you’ll more than love it. You won’t be able to complete your life until you finish this. It’s dark, twisted, shocking, and it sucks you in from the first page with flawed characters, a creepy romance, and victims that may be villains.
Enter a portal to another world along with Charlie Reade, and you’ll be thrown into a parallel universe with suffering princesses and princes; plenty of dungeons; and a magical sundial that can take you back in time.
The king of horror is back, and yes, this may be his best book yet.
WASHINGTON — Last week a Russian radio station conducted an interview with an official in Kherson, one of the four regions illegally annexed by Russia as part of its invasion of Ukraine.
Like virtually all media in Russia, the station, Radio Rossii, follows an unspoken rule of hewing to the Kremlin’s line about the “special military operation” launched in February going more or less according to plan. That the spetzoperatsiya is a full-blown war, or that the war is going poorly, is a taboo subject within Russia.
Which made what happened next all the more remarkable. In the midst of the interview with the pro-Russian official in Kherson, one of the hosts asked, in a halfway hopeful tone, a question he all but answered: “So the situation is fine?” Only he did not quite get the expected — and required — response.
“The situation is difficult,” the Kherson administrator glumly admitted.
The telephone line went dead. The interview was over. Perhaps a faulty line had been at work, but given how little dissent is tolerated in Russian media outlets, the moment was revealing all the same.
So, for that matter, was the next segment, about a resident of Dagestan who was cooking plov (a kind of rice pilaf) for Russian soldiers, with the hosts fulsomely praising his patriotic efforts. The anti-mobilization protests that recently rattled Dagestan, a Muslim region in southern Russia, went utterly unmentioned.
However fleeting, reality has found a way to sneak through a Kremlin firewall bolstered by two decades of Vladimir Putin’s autocratic rule, not to mention the legacy of Soviet propaganda, which disguised the Kremlin’s failures and corruption while dutifully smearing dissidents. The Kremlin effectively controls every significant outlet in the country. Dissident outlets have been shut down; journalists have been harassed and, in many instances, murdered. Reporting on the war with the kind of critical scrutiny Western audiences expect is simply out of the question in Russia, at least for any journalist who wants to stay employed, free and alive.
But nearly eight months into a faltering war effort, difficult truths are becoming ever more difficult to ignore, from Ukraine’s blistering September counteroffensive to last weekend’s bombing of a bridge between Russia and Crimea. If the retreat of Russian forces from “annexed” parts of Donetsk and Luhansk has been chaotic, the efforts of top media outlets and personalities to explain away deflating developments have also been unruly — and frequently unconvincing.
“Putin relies on controlling the information space in Russia to safeguard his regime much more than on the kind of massive oppression apparatus the Soviet Union used,” argued a recent analysis from the Institute for the Study of War, “making disorder in the information space potentially even more dangerous to Putin than it was to the Soviets.”
As what was supposed to be a quick, triumphant invasion has devolved into a protracted war, many of Russia’s top media figures and outlets have struggled to stick to a coherent narrative that has some connection to reality — however faint — while not infuriating a Kremlin that considers media outlets to be functionally part of the state.
So as the Russian war effort deteriorates, Russian media is scrambling to adjust. If troops are performing poorly, then incompetent generals are at fault. And if Ukraine is winning, it is because the West despises Russia and will do everything in its power to destroy it. Despite that, Russia will ultimately prevail, Russian outlets say.
Thus, last weekend’s destruction of the Kerch Bridge, which provided a crucial link between the Russian mainland and the Crimean Peninsula, was not a military setback or a gross failure by Russian security services. Instead, it was described as a terrorist act by Ukrainian special forces working in concert with Western allies.
On Monday morning, Russia retaliated by firing rockets at Kyiv and other civilian population centers across Ukraine. Those strikes went unmentioned on Russia 1, by far the nation’s top television network, where the morning news featured a segment about roof repairs to apartment buildings in areas of Ukraine controlled by Russia. There were also at least two promotional segments, in the span of just a few minutes, about “Chaika” (“Seagull”), a new “Friday Night Lights”-style melodrama about women’s volleyball.
There was little sense that the war was escalating, that the Kremlin was becoming desperate, that the shelling of civilian areas — at least 19 people were reportedly killed — was out of bounds, especially given the supposed kinship of Russian and Ukrainian peoples. Instead, a viewer may have concluded that it was just an ordinary Monday morning in Moscow.
“Overall, the messages are pretty clear: that the war is still going in the right direction, on the whole,” says Ian Garner, a Russia scholar who closely observes the nation’s media outlets. “There are lots of references to World War II and the idea that there was a catastrophic retreat for the first year of the war but eventually the tide was turned — that this period of retreat and suffering was just something that Russians had to go through.”
Even as they allow a measure of frustration into their broadcasts, the Kremlin’s top propagandists continue to offer rousing defenses of the war. “They think they’ve won. They don’t understand that nothing has begun,” top television personality Vladimir Solovyov raged late last week at Ukraine and its allies.
“There will be no white smoke” of surrender emanating from the Kremlin, Solovyov assured his audience.
While the information firewall ringing Russia may not be cracking, it is looking more and more perilous with each new setback for an invasion that was supposed to conclude with a quick capitulation in Kyiv. Instead, an emboldened Ukrainian defense has been pushing back Russian forces, reclaiming ground lost throughout the winter and early spring. In response, the Kremlin has ordered a mobilization of hundreds of thousands of young men to replenish its depleted frontlines, triggering protests that have led to confrontations with police across Russia.
So even as Ukrainian forces continued to make advances, one television commentator on the talk show “Planeta” suggested that Ukrainian President Volodymyr Zelensky should immediately order his troops to surrender. “Armed Forces of Ukraine have no chance,” read a headline on the Pravda website, referencing Monday’s strikes on Kyiv, which terrified civilians but appeared to have no operational value for Russia.
For a Russian nation that defeated both Hitler and Napoleon, these losses have been difficult to stomach, even though their scope is almost always kept hidden from public view. “The headline is always that things are much worse for somebody else,” says Garner. Ukrainian losses invariably make headlines, even if Russian losses are much worse. Incremental advances by Russia are inflated. Conspiracy theories abound, with the British MI6 and American CIA intelligence services frequently accused of directing the Ukrainian resistance.
Still, increasingly public criticisms of Defense Minister Sergei Shoigu and his generals by Yevgeny Prigozhin (who runs a paramilitary outfit known as the Wagner Group, which reports directly to the Kremlin) and Chechen warlord Ramzan Kadyrov, a fanatical Kremlin ally, are obvious signals to ordinary Russians that the war is going awry. Even though those criticisms also elide the fact that ultimate responsibility for the invasion does not reside with Shoigu, they nevertheless recognize that the Russian public is aware, however faintly, that the special operation is faltering.
“You accept elements of the disasters, but then you say that under Putin’s leadership — now he’s finding out about these things — these initial errors are being resolved,” Garner points out. Indeed, criticism of Putin himself remains nonexistent, an understandable calculation given his low tolerance for public dissent. This approach hearkens back to the “if Stalin knew” rhetoric of Soviet propaganda, which always found low-level Kremlin functionaries to blame for disasters that dictator Joseph Stalin surely would have no tolerance for, had he been properly informed.
In reality, the responsibility lay then, as it does now, with the very highest strata of official power.
One crucial difference between Soviet propaganda and its modern-day descendants is that social media platforms were not available when Stalin dispatched millions to death or imprisonment amid his notorious show trials of the 1930s, or when Mikhail Gorbachev kept silent about the nuclear meltdown at Chernobyl in 1986. The popularity of the social media platform Telegram makes it more difficult for traditional media to keep to a relentless optimism, especially since many of Russia’s best-known traditional media figures are on Telegram themselves.
Another emerging challenge to the Kremlin comes in the form of Russian military bloggers, or milbloggers, who are ardent supporters of the war but are unafraid to say it is going poorly. It was the milbloggers, for example, who revealed how chaotic Putin’s “partial mobilization” of 300,000 troops had quickly become.
Images of poorly prepared, sometimes drunk recruits, armed with comically rusty rifles, quickly made their way onto social media, making reality impossible to ignore. Needing someone to blame, outlets focused on “local recruitment officers,” says Oxford professor and Russia expert Samuel Ramani.
“The regime seems unable or unwilling to control milbloggers,” Center for European Policy Analysis senior fellow Olga Lautman wrote in a mid-September analysis of Russia’s media landscape, despite Kremlin efforts to court the bloggers and focus their rage.
Evidently aware that bad news was filtering through, pundit Armen Gasparyan recently advised Russians to exercise “self-censorship” against stories of Russia’s military collapse. “Stop reading ideological opponents,” who are working to instill “panic” in Russian society, he said.
But not even a zealous Putinite like Gasparyan could keep from acknowledging the scope of recent losses against a Ukraine bolstered by the West.
“Yes, I understand, and I will be the first to say it, Lyman was very painful,” he said, referring to last week’s retreat by Russian forces from a strategic Ukrainian city it had previously controlled. “But,” he added, “with us, historically, this is how it is. Don’t get hysterical. Everything is in our hands.”
Ramani of Oxford notes that blame for the retreat from Lyman fell on Gen. Aleksandr Lapin (despite the fact that some milbloggers came to his defense). But as the military correspondent Yuri Kotenok pointed out, blaming Lapin is misplaced. He instead blamed the sorry state of the Russian military and the Kremlin leadership seemingly being blind to the situation in Ukraine. “The saddest thing is that such truth about the real state of affairs does not reach the top,” Kotenok wrote for the website Today.
But such flirtations with honesty are rare. In a Monday column, Komsomolskaya Pravda military correspondent Alexander Kotz argued that the attack on the Kerch Bridge should inspire the Russian military to undertake similarly bold actions. “Let’s fight with more anger, for real, without excuses about the impossibility of blowing up the bridge on which weapons are coming from the West,” Kotz seethed. “The Ukrainians have shown us that nothing is impossible.”
Q: We are under contract to buy a property in the path of the latest hurricane. What happens now? — Anonymous
A: Whether it is a hurricane or any other natural disaster, you need to review the purchase contract.
The first clause to look for is often called the “force majeure” clause. Force majeure is loosely translated to “greater force,” referring to casualties caused through no fault of the parties. This can mean natural disasters, like hurricanes, or issues caused by fire and other man-made calamities that cannot be overcome.
This clause extends the contract’s deadlines several days after the disaster, and its immediate effects, have passed. This will put your closing on pause until you can assess the damage, bind insurance, and arrange to close escrow.
Next, review the “risk of loss” clause, which concerns which party is responsible if the property becomes damaged by force majeure during the contract. If the risk of loss is not addressed in the written agreement, it is dealt with under common law.
Common law contains a legal principle called “equitable conversion” that distinguishes between the two components of owning real estate, equitable title and legal title. Simplified, equitable title is the right of someone to obtain full ownership, along with its benefits. Legal title is the real ownership of the land, such as whose name is on the deed.
What every South Floridian – newcomer or native – should know. Get insider tips, information and happenings.
Typically, property owners possess both components. However, certain situations, like contracting to sell your property, will split the ownership.
When signing a contract to purchase the property, the buyer becomes the equitable owner, while the seller remains the legal owner of the property. When they close escrow at the end of the process, the seller deeds the legal title to the buyer.
This creates a problem for sensible buyers because the risk of loss due to damage to the property is part of the equitable title, meaning that the buyer is on the hook for damage to a property she has yet to buy.
Because of this, almost every real estate contract shifts that risk back to the seller. The agreement will usually set a dollar limit, usually one or two percent of the purchase price, that the seller must credit the buyer for the unexpected damage. If the damage exceeds this small amount, the contract can be canceled.
For your transaction, you will need to use the extra time given by the force majeure clause to get detailed inspections to determine what was damaged and how much the repair will cost. If that amount exceeds the small threshold in the risk of loss clause, you can cancel the contract and get the deposit back.
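The arithmetic behind that decision is simple enough to sketch. This is an illustrative calculation only, with a hypothetical function name and an assumed 1.5% threshold; the actual percentage and remedies are set by your contract:

```python
def casualty_decision(purchase_price, repair_estimate, threshold_pct=0.015):
    """Hypothetical sketch of the risk-of-loss math described above.

    If the repair estimate exceeds the contract's threshold (commonly
    1-2% of the purchase price), the buyer may cancel and recover the
    deposit; otherwise the seller credits the repair cost at closing.
    """
    threshold = purchase_price * threshold_pct
    if repair_estimate > threshold:
        return "may cancel and recover deposit"
    return f"seller credits ${repair_estimate:,.2f} at closing"
```

For example, on a $400,000 purchase with a 1.5% threshold, the cutoff is $6,000: a $3,000 repair becomes a seller credit, while a $10,000 repair lets the buyer walk away.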
Of course, you can try renegotiating the deal taking the damage into account.
Board-certified real estate lawyer Gary Singer writes about industry legal matters and the housing market. To ask him a question, email him at firstname.lastname@example.org, or go to SunSentinel.com/askpro.