Completely free 000-M191 exam braindumps are provided by killexams.com

If studying just the 000-M191 course books and eBooks will not get you through the exam, visit killexams.com and download the 000-M191 practice test. You can download a 100% free PDF sample to evaluate before you purchase the full version, which will show you that this is the best decision on your road to success. Just memorize the 000-M191 PDF questions, practice with the VCE exam simulator, and the work is done.

Exam Code: 000-M191 Practice exam 2022 by Killexams.com team
Tivoli Storage Sales Mastery Test v2
IBM Storage test
Killexams : CXL Borgs IBM’s OpenCAPI, Weaves Memory Fabrics With 3.0 Spec

System architects are often impatient about the future, especially when they can see something good coming down the pike. And thus, we can expect a certain amount of healthy and excited frustration when it comes to the Compute Express Link, or CXL, interconnect created by Intel, which with the absorption of Gen-Z technology from Hewlett Packard Enterprise and now OpenCAPI technology from IBM will become the standard for memory fabrics across compute engines for the foreseeable future.

The CXL 2.0 specification, which brings memory pooling across the PCI-Express 5.0 peripheral interconnect, will soon be available on CPU engines. Which is great. But all eyes are already turning to the just-released CXL 3.0 specification, which rides atop the PCI-Express 6.0 interconnect coming in 2023 with 2X the bandwidth, and people are already contemplating what another 2X of bandwidth might offer with CXL 4.0 atop PCI-Express 7.0 coming in 2025.

In a way, we expect CXL to follow the path blazed by IBM’s “Bluelink” OpenCAPI interconnect. Big Blue used the Bluelink interconnect in the “Cumulus” and “Nimbus” Power9 processors to provide NUMA interconnects across multiple processors, to run the NVLink protocol from Nvidia to provide memory coherence across the Power9 CPU and the Nvidia “Volta” V100 GPU accelerators, and to provide more generic memory coherent links to other kinds of accelerators through OpenCAPI ports. But the paths that OpenCAPI and CXL take will not be exactly the same, obviously. OpenCAPI is kaput and CXL is the standard for memory coherence in the datacenter.

IBM put faster OpenCAPI ports on the “Cirrus” Power10 processors, and they are used to provide those NUMA links as with the Power9 chips as well as a new OpenCAPI Memory Interface that uses the Bluelink SerDes as a memory controller, which runs a bit slower than a DDR4 or DDR5 controller but which takes up a lot less chip real estate and burns less power – and has the virtue of being exactly like the other I/O in the chip. In theory, IBM could have supported the CXL and NVLink protocols running atop its OpenCAPI interconnect on Power10, but there are some sour grapes there with Nvidia that we don’t understand – it seems foolish not to offer memory coherence with Nvidia’s current “Ampere” A100 and impending “Hopper” H100 GPUs. There may be an impedance mismatch between IBM and Nvidia in regards to signaling rates and lane counts between OpenCAPI and NVLink. IBM has PCI-Express 5.0 controllers on its Power10 chips – these are unique controllers and are not the Bluelink SerDes – and therefore could have supported the CXL coherence protocol, but as far as we know, Big Blue has chosen not to do that, either.

Given that we think CXL is the way a lot of GPU accelerators and their memories will link to CPUs in the future, this strategy by IBM seems odd. We are therefore nudging IBM to do a Power10+ processor with support for CXL 2.0 and NVLink 3.0 coherent links as well as with higher core counts and maybe higher clock speeds, perhaps in a year or a year and a half from now. There is no reason IBM cannot get some of the AI and HPC budget given the substantial advantages of its OpenCAPI memory, which is driving 818 GB/sec of memory bandwidth out of a dual chip module with 24 cores. We also expect that future datacenter GPU compute engines from Nvidia will support CXL in some fashion, but exactly how it will sit side-by-side with or merge with NVLink is unclear.

It is also unclear how the Gen-Z intellectual property donated to the CXL Consortium by HPE back in November 2021 and the OpenCAPI intellectual property donated to the organization steering CXL by IBM last week will be used to forge a CXL 4.0 standard, but these two system vendors are offering up what they have to help the CXL effort along. For which they should be commended. That said, we think both Gen-Z and OpenCAPI were way ahead of CXL and could have easily been tapped as in-node and inter-node memory and accelerator fabrics in their own right. HPE had a very elegant set of memory fabric switches and optical transceivers already designed, and IBM is the only CPU provider that offered CPU-GPU coherence across Nvidia GPUs and the ability to hook memory inside the box or across boxes over its OpenCAPI Memory Interface riding atop the Bluelink SerDes. (AMD is offering CPU-GPU coherence across its custom “Trento” Epyc 7003 series processors and its “Aldebaran” Instinct MI250X GPU accelerators in the “Frontier” exascale supercomputer at Oak Ridge National Laboratories.)

We are convinced that the Gen-Z and OpenCAPI technology will help make CXL better, and improve the kinds and varieties of coherence that are offered. CXL initially offered a kind of asymmetrical coherence, where CPUs can read and write to remote memories in accelerators as if they are local but using the PCI-Express bus instead of a proprietary NUMA interconnect – that is a vast oversimplification – rather than having full cache coherence across the CPUs and accelerators, which has a lot of overhead and which would have an impedance mismatch of its own because PCI-Express was, in days gone by, slower than a NUMA interconnect.

But as we have pointed out before, with PCI-Express doubling its speed every two years or so and latencies holding steady as that bandwidth jumps, we think there is a good chance that CXL will emerge as a kind of universal NUMA interconnect and memory controller, much as IBM has done with OpenCAPI, and Intel has suggested this for both CXL memory and CXL NUMA and Marvell certainly thinks that way about CXL memory as well. And that is why with CXL 3.0, the protocol is offering what is called “enhanced coherency,” which is another way of saying that it is precisely the kind of full coherency between devices that, for example, Nvidia offers across clusters of GPUs on an NVSwitch network or IBM offered between Power9 CPUs and Nvidia Volta GPUs. The kind of full coherency that Intel did not want to do in the beginning. What this means is that devices supporting the CXL.memory sub-protocol can access each other’s memory directly, not asymmetrically, across a CXL switch or a direct point-to-point network.

There is no reason why CXL cannot be the foundation of a memory area network as IBM has created with its “memory inception” implementation of OpenCAPI memory on the Power10 chip, either. As Intel and Marvell have shown in their conceptual presentations, the palette of chippery and interconnects is wide open with a standard like CXL, and improving it across many vectors is important. The industry let Intel win this one, and we will be better off in the long run because of it. Intel has largely let go of CXL and now all kinds of outside innovation can be brought to bear.

Ditto for the Universal Chiplet Interconnect Express being promoted by Intel as a standard for linking chiplets inside of compute engine sockets. Basically, we will live in a world where PCI-Express running UCI-Express connects chiplets inside of a socket, PCI-Express running CXL connects sockets and chips within a node (which is becoming increasingly ephemeral), and PCI-Express switch fabrics spanning a few racks or maybe even a row someday use CXL to link CPUs, accelerators, memory, and flash all together into disaggregated and composable virtual hardware servers.

For now, what is on the immediate horizon is CXL 3.0 running atop the PCI-Express 6.0 transport, and here is how CXL 3.0 is stacking up against the prior CXL 1.0/1.1 release and the current CXL 2.0 release on top of PCI-Express 5.0 transports:

When the CXL protocol is running in I/O mode – what is called CXL.io – it is essentially just the same as the PCI-Express peripheral protocol for I/O devices. The CXL.cache and CXL.memory protocols add caching and memory addressing atop the PCI-Express transport, and run at about half the latency of the PCI-Express protocol. To put some numbers on this, as we did back in September 2021 when talking to Intel, the CXL protocol specification requires that a snoop response on a snoop command when a cache line is missed has to be under 50 nanoseconds, pin to pin, and for memory reads, pin to pin, latency has to be under 80 nanoseconds. By contrast, a local DDR4 memory access on a CPU socket is around 80 nanoseconds, and a NUMA access to far memory in an adjacent CPU socket is around 135 nanoseconds in a typical X86 server.
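
To make those figures easier to compare, here is a minimal Python sketch that simply tabulates the latency numbers quoted above and expresses each as a multiple of a local DDR4 access. The values are the ones cited in this article, and the CXL numbers are pin-to-pin spec budgets rather than end-to-end measurements, so treat the output as a rough illustration only.

```python
# Rough comparison of the access-latency figures quoted above (nanoseconds).
# The CXL numbers are spec budgets (pin to pin), not end-to-end measurements.
latencies_ns = {
    "local DDR4 (same socket)": 80,
    "NUMA hop (adjacent socket)": 135,
    "CXL snoop response budget": 50,
    "CXL memory read budget": 80,
}

baseline = latencies_ns["local DDR4 (same socket)"]
for name, ns in latencies_ns.items():
    print(f"{name:30s} {ns:4d} ns  ({ns / baseline:.2f}x local DRAM)")
```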

With the CXL 3.0 protocol running atop the PCI-Express 6.0 transport, the bandwidth is being doubled on all three types of drivers without any increase in latency. That bandwidth increase, to 256 GB/sec across x16 lanes (including both directions) is thanks to the 256 byte flow control unit, or flit, fixed packet size (which is larger than the 64 byte packet used in the PCI-Express 5.0 transport) and the PAM-4 pulsed amplitude modulation encoding that doubles up the bits per signal on the PCI-Express transport. The PCI-Express protocol uses a combination of cyclic redundancy check (CRC) and three-way forward error correction (FEC) algorithms to protect the data being transported across the wire, which is a better method than was employed with prior PCI-Express protocols and hence why PCI-Express 6.0 and therefore CXL 3.0 will have much better performance for memory devices.
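
A quick back-of-the-envelope calculation shows where the 256 GB/sec figure comes from. The sketch below assumes the nominal 64 GT/sec PCI-Express 6.0 signaling rate and ignores flit, CRC, and FEC overhead, so real-world throughput will be somewhat lower.

```python
# Nominal PCI-Express 6.0 x16 bandwidth, ignoring flit/CRC/FEC overhead.
gt_per_sec_per_lane = 64        # PCIe 6.0: 64 GT/s per lane thanks to PAM-4
lanes = 16
bits_per_transfer = 1           # each transfer carries one bit per lane

gbits_per_direction = gt_per_sec_per_lane * lanes * bits_per_transfer
gbytes_per_direction = gbits_per_direction / 8
print(f"per direction:   {gbytes_per_direction:.0f} GB/s")      # 128 GB/s
print(f"both directions: {gbytes_per_direction * 2:.0f} GB/s")  # 256 GB/s
```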

The CXL 3.0 protocol does have a low-latency CRC algorithm that breaks the 256 B flits into 128 B half flits and does its CRC check and transmissions on these subflits, which can reduce latencies in transmissions by somewhere between 2 nanoseconds and 5 nanoseconds.
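
As a conceptual illustration of why sub-flit CRC helps, the sketch below splits a 256-byte flit into two 128-byte halves and checks each one independently, so a receiver could validate and forward the first half without waiting for the second. It uses zlib.crc32 purely as a stand-in; the actual CXL 3.0 CRC code and flit layout are different, so this is an assumption-laden toy, not the real algorithm.

```python
import os
import zlib

FLIT_BYTES = 256
HALF = FLIT_BYTES // 2

def split_and_check(flit: bytes):
    """Check each 128-byte half-flit independently (illustrative only)."""
    assert len(flit) == FLIT_BYTES
    halves = (flit[:HALF], flit[HALF:])
    # In hardware, the CRC for the first half can be verified (and the data
    # forwarded) while the second half is still arriving on the wire.
    return [zlib.crc32(half) for half in halves]

flit = os.urandom(FLIT_BYTES)   # stand-in for a real 256 B flit
print(split_and_check(flit))
```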

The neat new thing coming with CXL 3.0 is memory sharing, and this is distinct from the memory pooling that was available with CXL 2.0. Here is what memory pooling looks like:

With memory pooling, you put a glorified PCI-Express switch that speaks CXL between hosts with CPUs and enclosures with accelerators with their own memories or just blocks of raw memory – with or without a fabric manager – and you allocate the accelerators (and their memory) or the memory capacity to the hosts as needed. As the diagram above shows on the right, you can do a point to point interconnect between all hosts and all accelerators or memory devices without a switch, too, if you want to hard code a PCI-Express topology for them to link on.
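
The sketch below is a toy model of that pooling idea: a "fabric manager" hands out capacity from a shared pool of memory devices to whichever host asks for it. It is not the real CXL fabric-manager API – the class names and allocation policy are invented for illustration – but it shows the kind of bookkeeping an actual implementation would do.

```python
# Toy model of CXL 2.0-style memory pooling (not the real fabric-manager API).
class MemoryDevice:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb

class FabricManager:
    def __init__(self, devices):
        self.devices = devices
        self.allocations = []          # (host, device, gigabytes)

    def allocate(self, host, gigabytes):
        """Give a host capacity from the first device with enough free space."""
        for dev in self.devices:
            if dev.free_gb >= gigabytes:
                dev.free_gb -= gigabytes
                self.allocations.append((host, dev.name, gigabytes))
                return dev.name
        raise RuntimeError("pool exhausted")

fm = FabricManager([MemoryDevice("expander-0", 512), MemoryDevice("expander-1", 512)])
print(fm.allocate("host-a", 256))   # expander-0
print(fm.allocate("host-b", 384))   # expander-1
print(fm.allocate("host-c", 256))   # expander-0
```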

With CXL 3.0 memory sharing, memory out on a device can be shared simultaneously by multiple hosts. This chart below shows the combination of device shared memory and coherent copies of shared regions enabled by CXL 3.0:

System and cluster designers will be able to mix and match memory pooling and memory sharing techniques with CXL 3.0. CXL 3.0 will allow for multiple layers of switches, too, which was not possible with CXL 2.0, and therefore you can imagine PCI-Express networks with various topologies and layers being able to lash together all kinds of devices and memories into switch fabrics. Spine/leaf networks common among hyperscalers and cloud builders are possible, including devices that just share their cache, devices that just share their memory, and devices that share their cache and memory. (That is Type 1, Type 3, and Type 2 in the CXL device nomenclature.)
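
For reference, here is a small sketch of how those device classes map onto the CXL sub-protocols, based on the publicly described CXL nomenclature (Type 1 is cache only, Type 2 is cache plus memory, Type 3 is memory only). It is a summary table written as Python, not code from any real CXL software stack.

```python
# CXL device classes and the sub-protocols they use (per the public spec outline).
CXL_DEVICE_TYPES = {
    "Type 1": {"example": "accelerator / SmartNIC with a cache, no exposed memory",
               "protocols": ("CXL.io", "CXL.cache")},
    "Type 2": {"example": "accelerator (e.g. GPU) with its own coherent memory",
               "protocols": ("CXL.io", "CXL.cache", "CXL.mem")},
    "Type 3": {"example": "memory expander or pooled-memory device",
               "protocols": ("CXL.io", "CXL.mem")},
}

for dev_type, info in CXL_DEVICE_TYPES.items():
    print(f"{dev_type}: {', '.join(info['protocols']):28s} - {info['example']}")
```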

The CXL fabric is what will be truly useful and what is enabled in the 3.0 specification. With a fabric, you get a software-defined, dynamic network of CXL-enabled devices instead of a static network set up with a specific topology linking specific CXL devices. Here is a simple example of a non-tree topology implemented in a fabric that was not possible with CXL 2.0:

And here is the neat bit. The CXL 3.0 fabric can stretch to 4,096 CXL devices. Now, ask yourself this: How many of the big iron NUMA systems and HPC or AI supercomputers in the world have more than 4,096 devices? Not as many as you think. And so, as we have been saying for years now, for a certain class of clustered systems, whether the nodes are loosely or tightly coupled at their memories, a PCI-Express fabric running CXL is just about all they are going to need for networking. Ethernet or InfiniBand will just be used to talk to the outside world. We would expect to see flash devices front-ended by DRAM as a fast cache as the hardware under storage clusters, too. (Optane 3D XPoint persistent memory is no longer an option. But there is always hope for some form of PCM memory or another form of ReRAM. Don’t hold your breath, though.)

As we sit here mulling all of this over, we can’t help thinking about how memory sharing might simplify the programming of HPC and AI applications, especially if there is enough compute in the shared memory to do some collective operations on data as it is processed. There are all kinds of interesting possibilities. . . .

Anyway, making CXL fabrics is going to be interesting, and it will be the heart of many system architectures. The trick will be sharing the memory to drive down the effective cost of DRAM – research by Microsoft Azure showed that on its cloud, memory capacity utilization was only an average of about 40 percent, and half of the VMs running never touched more than half of the memory allocated to their hypervisors from the underlying hardware – to pay for the flexibility that comes through CXL switching and composability for devices with memory and devices as memory.
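
To see why that 40 percent utilization figure matters economically, here is a hedged back-of-the-envelope sketch. All of the inputs (server count, DRAM per server, DRAM price) are made-up placeholders; only the utilization number comes from the Microsoft Azure research cited above.

```python
# Back-of-the-envelope: value of DRAM stranded by low utilization.
# All inputs below are hypothetical placeholders except the 40% utilization
# figure, which comes from the Microsoft Azure research cited above.
servers = 10_000
dram_per_server_gb = 1_024
dram_cost_per_gb = 4.0            # hypothetical $/GB
avg_utilization = 0.40

installed_gb = servers * dram_per_server_gb
idle_gb = installed_gb * (1 - avg_utilization)
print(f"installed DRAM:  {installed_gb:,} GB")
print(f"idle on average: {idle_gb:,.0f} GB "
      f"(~${idle_gb * dram_cost_per_gb / 1e6:.0f}M of memory sitting unused)")
```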

What we want, and what we have always wanted, was a memory-centric systems architecture that allows all kinds of compute engines to share data in memory as it is being manipulated and to move that data as little as possible. This is the road to higher energy efficiency in systems, at least in theory. Within a few years, we will get to test this all out in practice, and it is legitimately exciting. All we need now is PCI-Express 7.0 two years earlier and we can have some real fun.

Tue, 09 Aug 2022 06:18:00 -0500 Timothy Prickett Morgan https://www.nextplatform.com/2022/08/09/cxl-borgs-ibms-opencapi-weaves-memory-fabrics-with-3-0-spec/
Killexams : From Floppies to Solid State: The Evolution of PC Storage Media

Since the dawn of computing, we've struggled with how to store all this digital stuff. International Business Machines helped launch the PC revolution in the 1980s, but computers were dealing with storage issues long before that. In fact, that same company had the first hard disk drive running back in 1956—a 2,000-pound unit that cost $35,000 per year to operate.

It also held only 5 megabytes (MB). But just look at how streamlined that thing is.

Other ways to store data existed in those early days, from punch cards to giant, reel-to-reel magnetic tape machines. Thankfully, by the time PCs first made it to our offices and living rooms, storage devices were substantially smaller, if not yet as small as what we carry in our pockets today.

Let's look back at what it took to store data on a PC from the early days through today. It should give you a whole new appreciation for the size, speed, and capacity of today’s latest storage methods.


1. 5.25-Inch Floppy Disk

A 5.25-inch floppy drive from an original IBM PC

IBM created the floppy drive as a means of read-only magnetic storage in 1972. Floppy disks originally came in a size of 203.2mm, which is close enough to 8 inches for that to be the moniker used. The round disk inside was in a permanent flexible (floppy) jacket to keep fingers off.

The eight-inch size didn't stick around for very long. Steve Wozniak designed the first external Apple II disk drive in 1978; it used a 5.25-inch floppy disk. Soon, Commodore, Tandy, and Atari adopted the same format.

The original IBM PC 5150 that debuted in August 1981 offered the option of one or two internal 5.25-inch floppy drives. Each floppy diskette could hold 160 kilobytes on one side, or 320KB if you could use both (not all disks were double-sided). The drives required a controller card on the motherboard and were connected with ribbon cables. Back then, having two floppy drives made a huge difference because one of them could hold the operating system while the other drive loaded a program, such as Lotus 1-2-3. You wouldn't have to swap disks.

Hard drives soon became the permanent, long-term data storage standard, and next-generation floppy disks would soon take over for portability, both of which we'll get to below. The 5.25-inch floppy was fully ejected by 1994.


2. Cassette Tape

Iomega Ditto

Magnetic tape isn't that far different from a floppy disk, although it's a lot slower when accessing stored data. In the 1980s, computer software was often sold on cassette tape, just like music albums. Cassette recorders were available for home computers such as the Apple II and Commodore 64.

The original IBM PC also had a port for one. A 90-minute cassette could hold about a megabyte of data. But few developers sold PC software on tape because the computer almost always came with at least one floppy drive. IBM soon dropped the 5-pin DIN cassette port on its later systems, but it continued to sell the original 5150 right up through 1987 without a floppy drive if a customer preferred tape.

Why include a port for tape at all? Some people wanted to run a version of BASIC called Cassette BASIC that only worked off of tape, and DOS had no cassette tape support (DOS stood for Disk Operating System, after all). And because tape was the cheapest storage available.

Third parties made proprietary tape-based drives for backup, such as Iomega and its Ditto drive of the 1990s. Iomega gave it up and sold off the tape drive biz before the end of the decade. 

Unlike the floppy drive, however, tape has never gone away. You can still buy uber-expensive cartridge drives using the Linear Tape-Open (LTO) spec for massive backup use—usually they’re found in enterprises, backing up servers full of important data.


3. 3.5-Inch Floppy Disk

3.5-inch floppy disks

The 3.5-inch floppy disk is the universal iconic symbol for saving your work for a reason. The smaller disk wasn't as floppy as 8-inch and 5.25-inch diskettes because the 3.5-inch version came inside a hard plastic shell. It did, however, become the top-selling removable storage medium by 1988. This despite a limited capacity: first 720KB, then in a high-density 1.44MB version. IBM made a 2.88MB double-sided extended-density diskette for the IBM PS/2, but that standard went nowhere.

3.5-inch floppies were a mainstay of PC software well into the 90s; five billion 3.5-inch floppies were in use by 1996.

But the small diskettes couldn’t keep up with the demands of bloated software. At one point, for example, Microsoft shipped a version of Windows 98 that required sequentially inserting 21 different floppy disks to install it on a hard drive. Microsoft Office required almost twice that many. You could build up your arm muscles by replacing disks while installing software to a hard drive. Sony, one of the biggest manufacturers, stopped making 3.5-inch floppies in 2011.


4. Hard Disk Drive

A Seagate ST-412 hard disk drive from an original IBM PC

Hard disk drives (HDDs) were nothing new in 1982, but a hard drive didn’t make it into the first IBM PC. Instead, the world (and PC Magazine) awaited the second-generation eXTension (XT) model. The PC XT included a standard 10MB HDD, which we called "certainly significant" in our Feb-Apr 1983 issue. The drive required a new power supply and a BIOS update, all of which contributed to the XT's much higher price of $4,995 (that’s $14,380 with 2022 inflation).

The IBM PC's first HDD was the Seagate Technology Model ST-412. The interface between it and the motherboard became the de facto disk drive standard for several years.

Entire books have been written about HDDs (though one book entitled Hard Drive was about the hard-driving influence of Microsoft). The impact of spacious, local, re-writable storage on a platter changed everything. Hard drives continued to dominate system storage decades later due to their overall reliability and ever-increasing speed and capacity.

Today, you can find 20-terabyte (TB) internal hard drives on the market, such as the Seagate Exos X20 for $389. That company alone has shipped a full 3 zettabytes of hard drive storage capacity as of 2021—the equivalent of 150,000,000 hard drives with 20TB each.
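
If you want to check that 150 million figure yourself, it is simple arithmetic on decimal units, as the quick calculation below shows.

```python
# Sanity-check Seagate's equivalence: 3 ZB shipped vs. 20 TB drives (decimal units).
zettabytes_shipped = 3
bytes_shipped = zettabytes_shipped * 10**21
drive_capacity_bytes = 20 * 10**12          # 20 TB per drive

equivalent_drives = bytes_shipped // drive_capacity_bytes
print(f"{equivalent_drives:,} drives")       # 150,000,000
```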


5. Zip Disk

Zip disks

The Zip Drive and its high-capacity floppy disks never really replaced the standard floppy, but of the many “superfloppy” products that tried, only Iomega’s came close. The company had limited success with its Bernoulli Box removable floppies in the 1980s. But the 1994 debut of the very affordable Zip Drive put Bernoulli on a whole other level.

Zip disks were the first to hold 100MB of data each; subsequent releases went to 250MB and even 750MB in 2002. Iomega also survived the famous Click of Death lawsuit in 1998. By 2003, Iomega had shipped some 50 million Zip Drives.

But timing is everything. Zip Drives were caught between the era of the floppy and the onslaught of writable CDs that could seek data much faster, plus local networks that made file transfers much easier. EMC bought Iomega, and soon partnered with Lenovo before killing off the Zip drive line.


6. Jaz Disk

A Jaz disk

Following the debut of the popular Zip disk, Iomega tried to build on that success in 1995 with the Jaz. The thicker Jaz format boosted capacity to 1GB per disk, and then to 2GB by 1998—perfect for creatives who needed copious amounts of storage.

Iomega marketed the Jaz mainly as a $500 external drive, although an internal version was available, which the Zip also had as an option. The Jaz drive connected via a SCSI interface, which was big on the Macintosh, though some later models connected to parallel ports. A SCSI adapter worked with USB and even FireWire.

The Jaz had some of the same issues as the Zip, however, including the Click of Death problem and overheating. Like the Zip, the Jaz also pushed up against the coming of the CD and CD-R, and couldn't compete on price.


7. USB Flash Drive

A USB drive in a Swiss army knife

2000 saw the first ever Universal Serial Bus (USB)-based flash-memory drive, the ThumbDrive from Trek Technology. A holy matrimony between the easy-to-use and now mainstream USB port and (finally inexpensive) non-volatile NAND flash memory, the ThumbDrive was among the first chips that didn’t require power to retain data. IBM’s first flash drive that same year, the DiskOnKey, held 8MB for $50.

Soon, the floodgates opened. Tons of companies made small, fast, somewhat-high-capacity solid-state drives as big as your thumb. Many sued each other. It took years for Trek to win the US trademark to the name “Thumbdrive” in 2010, by which time the term was genericized—but that win is also why PCMag and others now call them “flash drives” instead.

That initial 1.0 USB specification gave way to the 30x faster speeds of 2.0, which only helped flash memory drives. By 2004 the first 1GB flash drive shipped.

Today, USB flash drives typically use USB 3.0 with a read speed of 120 MB per second. In our Best USB Flash Drives for 2022, we tested devices with the old-school USB-A connector (Samsung Bar Plus 128GB USB 3.1 Flash Drive for $21) and even some that use the faster USB-C (SanDisk Ultra Dual Drive 128GB USB Type-C Flash Drive for $18). We’ve seen them in all shapes (including as part of a Swiss army knife), some with incredibly high capacity, some with security locks integrated, and other crazy things. For all-around ease of use coupled with secure, mobile storage, USB flash drives remain hard to beat.


8. Memory Card

SD cards

It's not unfair to think of memory cards as USB flash drives without the USB. The cards can be much, much smaller. While they work as media storage for PCs, they’re more likely to be found in even smaller devices, requiring you to have an adapter for your PC to read them.

The first “cards” were the large PCMCIA devices that were sized like a credit card, albeit substantially thicker. This gave way in the mid-1990s to Compact Flash, a format that you can still find in devices today—then to Toshiba’s SmartMedia Card (SMC), a NAND-based flash memory that held as much as 128MB on a card only 0.76mm thick.

Memory cards have had many subsequent names and sizes and shapes in the last two decades: Multimedia Cards (MMC), Secure Digital (SD), SmartMedia, Memory Stick, XD-Card, and Universal Flash Storage (UFS), among others. Eventually, the most popular, SD, got smaller via miniSD and microSD; they remain the most prevalent today.

Originally, memory cards were meant to replace floppy disks or even the high-capacity ones like the Zip. But the tiny size made them ideal to become the digital replacement for film in cameras. The memory card propelled the age of digital photography. Today, support for memory cards in Android-based smartphones ebbs and flows (usually, it ebbs). Some memory cards are even specific to various brands and generations of game consoles.


9. CD-ROM

Using a CD-ROM with a laptop

The read-only memory that changed the world. The fully-optical-and-digital compact disc full of data held up to 650MB on 1.2 mm of polycarbonate plastic with a reflective aluminum surface. They could be read only by a laser. CD-ROM became the standard for software and video game distribution in the late 1980s and persisted through the 90s. (Music CDs are similar, but they use a different format, although computer CD drives could eventually read those, too.)

The CD-ROM's only downside is that it is read-only memory (it's right there in the name). Users couldn’t write data to it. This did, however, make them ideal for software and game distributors, who liked that it was easy to copy-protect.

The faster the drive could spin the CD-ROM, the faster the data could be accessed. The base standard of 1x was about 150KB per second, but eventually, drives were hitting 52x or even 72x, but with some physical world caveats.
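
If you want to put numbers on those multipliers, the sketch below scales the roughly 150KB-per-second 1x rate and estimates how long a full 650MB disc would take to read at each speed, ignoring seek times and the spin-rate caveats mentioned above.

```python
# CD-ROM transfer rates: 1x is roughly 150 KB/s; ignores seeks and spin-up.
BASE_KBPS = 150
DISC_MB = 650

for multiplier in (1, 8, 24, 52, 72):
    kb_per_sec = BASE_KBPS * multiplier
    seconds = (DISC_MB * 1024) / kb_per_sec
    print(f"{multiplier:3d}x: {kb_per_sec/1024:5.1f} MB/s, "
          f"full 650MB disc in ~{seconds/60:4.1f} min")
```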


10. CD-R and CD-RW

A CD-R and a CD-RW with a printable surface

The compact disc-recordable (CD-R) was originally called the CD-Write-Once and uses some of the same technology as the earlier magneto-optical drive—the ability to write your data to a disc one time only for backup or distribution. You could write to CD-Rs in the audio format (“Red Book”) holding up to 80 minutes of music or data format (“Yellow Book”) with 700MB of info, and they’d work in regular CD players or CD-ROM drives most of the time. The CD-R format was part of the “Orange Book” standard, and writing to CD-Rs became known as “burning” a CD.

You can still easily buy CD-R media online. Verbatim sells a 100 disc pack for $19.22 on Amazon.

Another Orange Book-based product introduced in 1997, the CD-RW became the first truly mainstream optical media option that let you write to the disc, erase it, and write to it again. You couldn't do it forever, maybe 1,000 times, but that’s still a lot. They’re almost the same as CD-Rs but with different reflective layers to facilitate erasure and re-writing.

The biggest drawback of CD-RWs is that not all older CD and CD-R drives will read them. They also don’t necessarily last as long as the original CD-ROMs. A spindle of 50 CD-RW discs from Verbatim currently sells for $31. Plus, they’re printable—you can run them through select printers to label them.


11. DVD and DVD±RW

Stacks of DVD-R disks

The Digital Video Disc, or Digital Versatile Disc depending on who you ask, also came in the late 1990s and became the primo way to distribute high-end video of films quickly. It was better than LaserDisc because it was much smaller, it included sharper digital video, and it also didn't need to be flipped halfway through a movie. The DVD was enough to replace VHS and also get Netflix off the ground in 1998 as a mail-order movie rental biz. Remember those red envelopes?

There are two types of re-writeable DVD: the standard “dash” format (DVD-R/RW) from 1997 and the plus (DVD+R/RW) from 2002. Different industry consortiums back each standard. With an R, you can write once; with RWs you can re-write, just like with the CD version. The big upside of a DVD-R for computer storage, of course, is it holds a lot more than a CD-R. A regular DVD-R on a single side using a single layer can store 4.7GB of data. A 30-pack of Verbatim-brand DVD+RW discs now goes for $23.25.


12. Sony Blu-ray

A Blu-ray BD-RE disc

You probably know Blu-ray as the format for buying high-definition movies (with lots of extras) on a disc. It's the format that won the war against Toshiba’s HD-DVD in the aughts, finally giving Sony some justice over what happened with Betamax. It wasn’t originally created for the purpose, but Blu-ray became king of the movie-watching hill...at least until streaming went, uh, mainstream. (But some of us still prefer physical media.)

Recordable (BD-R) or Recordable Erasable (BD-RE) Blu-ray discs have been available since at least 2005, assuming you have the right kind of drive that can handle 45Mbps write speed. The standard disk capacity is 25GB or 50GB depending on whether it's single- or dual-layer.


13. Solid-State Drive (SSD)

Samsung Solid State Drive

The first SSD appeared in 1991, but it took a few decades for the tech to go mainstream. It's essentially like flash drive memory, on a grander scale of capacity, and using semiconductor cells for non-volatile storage. SSDs work in a PC like HDDs, but without any of the moving parts that spell eventual doom. And SSDs are a lot faster, making them perfect for booting up an operating system.

SSDs often accompany HDDs in lower-cost PCs, and increasingly the SSD is the only drive on board. Plus, there are many external SSD options. SSDs also make great upgrades for PCs that need a new drive, even laptops, thanks to the small “gumstick” M.2 format. You can read more about them in SSD vs. HDD: What’s the Difference, and our sister site ExtremeTech has a deep dive on how SSDs work.


Want to see some stranger storage? Check out 10 Bizarre PC Storage Formats that Didn’t Quite Cut It.

Wed, 03 Aug 2022 05:01:00 -0500 https://uk.pcmag.com/storage/141885/from-525-inch-floppies-to-solid-state-the-evolution-of-pc-storage-media
Killexams : IBM Expands Its Power10 Portfolio For Mission Critical Applications

It is sometimes difficult to understand the true value of IBM's Power-based CPUs and associated server platforms. And the company has written a lot about it over the past few years. Even for IT professionals that deploy and manage servers. As an industry, we have become accustomed to using x86 as a baseline for comparison. If an x86 CPU has 64 cores, that becomes what we used to measure relative value in other CPUs.

But this is a flawed way of measuring CPUs and a broken system for measuring server platforms. An x86 core is different than an Arm core, which is different than a Power core. While Arm has achieved parity with x86 for some cloud-native workloads, the Power architecture is different. Multi-threading, encryption, AI enablement – many such functions are designed into Power in ways that don't impact performance as they do on other architectures.

I write all this as a set-up for IBM's announced expanded support for its Power10 architecture. In the following paragraphs, I will provide the details of IBM's announcement and give some thoughts on what this could mean for enterprise IT.

What was announced

Before discussing what was announced, it is a good idea to do a quick overview of Power10.

IBM introduced the Power10 CPU architecture at the Hot Chips conference in August 2020. Moor Insights & Strategy chief analyst Patrick Moorhead wrote about it here. Power10 is developed on the open source Power ISA. Power10 comes in two variants – 15x SMT8 cores and 30x SMT4 cores. For those familiar with x86, SMT8 (8 threads/core) seems extreme, as does SMT4. But this is where the Power ISA is fundamentally different from x86. Power is a highly performant ISA, and the Power10 cores are designed for the most demanding workloads.

One last note on Power10. SMT8 is optimized for higher throughput and lower computation. SMT4 attacks the compute-intensive space with lower throughput.

IBM introduced the Power E1080 in September of 2021. Moor Insights & Strategy chief analyst Patrick Moorhead wrote about it here. The E1080 is a system designed for mission and business-critical workloads and has been strongly adopted by IBM's loyal Power customer base.

Because of this success, IBM has expanded the breadth of the Power10 portfolio and how customers consume these resources.

The big reveal in IBM’s recent announcement is the availability of four new servers built on the Power10 architecture. These servers are designed to address customers' full range of workload needs in the enterprise datacenter.

The Power S1014 is the traditional enterprise workhorse that runs the modern business. For x86 IT folks, think of the S1014 as equivalent to the two-socket workhorses that run virtualized infrastructure. One of the things that IBM points out about the S1014 is that this server was designed with lower technical requirements. This statement leads me to believe that the company is perhaps softening the barrier for the S1014 in data centers that are not traditional IBM shops. Or maybe for environments that use Power for higher-end workloads but non-Power for traditional infrastructure needs.

The Power S1022 is IBM's scale-out server. Organizations embracing cloud-native, containerized environments will find the S1022 an ideal match. Again, for the x86 crowd – think of the traditional scale-out servers that are perhaps an AMD single socket or Intel dual-socket – the S1022 would be IBM's equivalent.

Finally, the S1024 targets the data analytics space. With lots of high-performing cores and a big memory footprint – this server plays in the area where IBM has done so well.

In addition to these platforms, IBM also introduced the Power E1050. The E1050 seems designed for big data and workloads with significant memory throughput requirements.

The E1050 is where I believe the difference in the Power architecture becomes obvious. The E1050 is where midrange starts to bump into high performance, and IBM claims 8-socket performance in this four-socket configuration. IBM says it can deliver performance for those running big data environments, larger data warehouses, and high-performance workloads. Maybe more importantly, the company claims to provide considerable cost savings for workloads that generally require a significant financial investment.

One benchmark that IBM showed was the two-tier SAP Standard app benchmark. In this test, the E1050 beat an x86, 8-socket server handily, showing a 2.6x per-core performance advantage. We at Moor Insights & Strategy didn’t run the benchmark or certify it, but the company has been conservative in its disclosures, and I have no reason to dispute it.

But the performance and cost savings are not just associated with these higher-end workloads with narrow applicability. In another comparison, IBM showed the Power S1022 performs 3.6x better than its x86 equivalent for running a containerized environment in Red Hat OpenShift. When all was added up, the S1022 was shown to lower TCO by 53%.
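
Comparisons like these boil down to simple ratios, and the sketch below shows the arithmetic pattern with entirely hypothetical inputs: the benchmark scores, core counts, and three-year costs are placeholders, not IBM's published figures; only the 2.6x per-core and 53% TCO claims themselves come from the article.

```python
# Illustration of the per-core and TCO arithmetic behind claims like these.
# All inputs are hypothetical placeholders, not IBM's published figures.
def per_core_advantage(score_a, cores_a, score_b, cores_b):
    return (score_a / cores_a) / (score_b / cores_b)

def tco_reduction(cost_a, cost_b):
    return 1 - (cost_a / cost_b)

# Hypothetical benchmark results and three-year costs:
print(f"per-core advantage: {per_core_advantage(130_000, 96, 200_000, 384):.1f}x")
print(f"TCO reduction:      {tco_reduction(470_000, 1_000_000):.0%}")
```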

What makes Power-based servers perform so well in SAP and OpenShift?

The value of Power is derived both from the CPU architecture and the value IBM puts into the system and server design. The company is not afraid to design and deploy enhancements it believes will deliver better performance, higher security, and greater reliability for its customers. In the case of Power10, I believe there are a few design factors that have contributed to the performance and price/performance advantages the company claims, including

  • Differential DIMM technology that increases memory bandwidth, allowing for better performance from memory-intensive workloads such as in-memory database environments.
  • Built-in AI inferencing engines that increase performance by up to 5x.
  • Transparent memory encryption that carries no performance tax (note: AMD has had this technology for years, and Intel introduced it about a year ago).

These seemingly minor differences can add up to deliver significant performance benefits for workloads running in the datacenter. But some of this comes down to a very powerful (pardon the redundancy) core design. While x86 dominates the datacenter in unit share, IBM has maintained a loyal customer base because the Power CPUs are workhorses, and Power servers are performant, secure, and reliable for mission critical applications.

Consumption-based offerings

Like other server vendors, IBM sees the writing on the wall and has opened up its offerings to be consumed in a way that is most beneficial to its customers. Traditional acquisition model? Check. Pay as you go with hardware in your datacenter? Also, check. Cloud-based offerings? One more check.

While there is nothing revolutionary about what IBM is doing with how customers consume its technology, it is important to note that IBM is the only server vendor that also runs a global cloud service (IBM Cloud). This should enable the company to pass on savings to its customers while providing greater security and manageability.

Closing thoughts

I like what IBM is doing to maintain and potentially grow its market presence. The new Power10 lineup is designed to meet customers' entire range of performance and cost requirements without sacrificing any of the differentiated design and development that the company puts into its mission critical platforms.

Will this announcement move x86 IT organizations to transition to IBM? Unlikely. Nor do I believe this is IBM's goal. However, I can see how businesses concerned with performance, security, and TCO of their mission and business-critical workloads can find a strong argument for Power. And this can be the beginning of a more substantial Power presence in the datacenter.

Note: This analysis contains insights from Moor Insights & Strategy Founder and Chief Analyst, Patrick Moorhead.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Wed, 13 Jul 2022 12:00:00 -0500 Matt Kimball https://www.forbes.com/sites/moorinsights/2022/07/14/ibm-expands-its-power10-portfolio-for-mission-critical-applications/
Killexams : Tillis 101 bill not perfect but still ‘major milestone’: in-house

The senator’s proposed Section 101 legislation would cut down on exceptions to patent eligibility in the US

Fri, 05 Aug 2022 05:15:00 -0500 https://www.managingip.com/article/2agb6ybpkwethvz2ig3k0/tillis-101-bill-not-perfect-but-still-major-milestone-in-house
Killexams : IBM Watson Gets a Factory Job

IBM has launched an Internet of Things system as part of Watson. The tool is called Cognitive Visual Inspection, and the idea is to provide manufacturers with a “cognitive assistant” on the factory floor to minimize defects and increase product quality. According to IBM, in early production-cycle testing, Watson was able to reduce 80% of the inspection time while reducing manufacturing defects by 7-10%.

The system uses an ultra-high definition camera and adds cognitive capabilities from Watson to create a tool that captures images of products as they move through production and assembly. Together with human inspectors, Watson recognizes defects in products, including scratches or pinhole-size punctures.

“Watson brings its cognitive capabilities to image recognition,” Bret Greenstein, VP of IoT at IBM, told Design News. “We’re applying this to a wide range of industries, including electronics and automotive.”

The Inspection Eye That Never Tires

The system continuously learns based on human assessment of the defect classifications in the images. The tool was designed to help manufacturers achieve specialization levels that were not possible with previous human or machine inspection. “We created a system and workflow to feed images of good and bad products into Watson and train it with algorithms,” said Greenstein. “This is a system that can be trained in advance to see what acceptable products look like.”

According to IBM, more than half of product quality checks involve some form of visual confirmation. Visual checking helps ensure that all parts are in the correct location, have the right shape or color or texture, and are free from scratches, holes or foreign particles. Automating these visual checks is difficult due to volume and product variety. Add to that the challenge from defects that can be any size, from a tiny puncture to a cracked windshield on a vehicle.

Some of the inspection training precedes Watson’s appearance on the manufacturing line. “There are several components. You define the range of images, and feed the images into Watson. When it produces the confidence level you need, you push it to the operator stations,” said Greenstein. “Watson concludes whether the product is good or defective. You let the system make the decision.”

The ultimate goal is to keep Watson on a continuous learning curve. “We can push this system out to different manufacturing lines, and we can train it based on operators in the field and suggest changes to make the system smarter, creating an evolving inspection process,” said Greenstein.

The ABB Partnership

As part of its move into the factory, IBM has formed a strategic collaboration with ABB. The goal is to combine ABB’s domain knowledge and digital solutions with IBM’s artificial intelligence and machine-learning capabilities. The first two joint industry solutions powered by ABB Ability and Watson were designed to bring real-time cognitive insights to the factory floor and smart grids.

The suite of solutions developed by ABB and IBM is intended to help companies improve quality control, reduce downtime, and increase speed and yield. The goal is to improve on current connected systems that simply gather data. Instead, Watson is designed to use data to understand, sense, reason, and take actions to help industrial workers reduce inefficient processes and redundant tasks.

According to Greenstein, Watson is just getting its industry sea legs. In time, the thinking machine will take on an increasing number of industrial tasks. “We found a wide range of uses. We’re working with drones to look at traffic flows in retail situations to analyze things that are hard to see from a human point of view,” said Greenstein. “We’re also applying Watson’s capabilities to predictive maintenance.”

Rob Spiegel has covered automation and control for 17 years, 15 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.

Image courtesy of IBM

Thu, 28 Jul 2022 12:00:00 -0500 https://www.designnews.com/automation-motion-control/ibm-watson-gets-factory-job
Killexams : IBM Bids Farewell to Watson Health Assets

IBM shook up the digital health space Friday with the news that it is selling its healthcare data and analytics assets, currently part of the Watson Health business, to an investment firm. The sale price is reportedly more than $1 billion, although the companies are not officially disclosing the financial terms.

There are a lot of interesting factors to consider as we unpack this news, although some thought leaders say the divestiture did not come as a surprise.

“The Watson Health sale has been anticipated for quite some time. IBM was clearly not gaining much traction in the healthcare market while others such as Google and Microsoft have pulled ahead. Even Oracle has made a big splash in healthcare with its recent announcement to acquire Cerner," said Paddy Padmanabhan, founder and CEO of Damo Consulting, a growth strategy and digital transformation advisory firm that works with healthcare and technology companies.

IBM was one of the first big tech companies to dive into healthcare with its well-known Watson Health supercomputer known for defeating the greatest champions on “Jeopardy!" The platform created a lot of buzz back in 2011, and many people had high hopes for the platform's potential applications in healthcare. In recent years, however, that buzz has significantly died down.

"In the current competitive landscape, IBM would not be considered a significant player in healthcare. Selling off the data assets essentially means an end to the Watson Health experiment; however, it may allow IBM as an organization to refocus and develop a new approach to healthcare,” Padmanabhan said.

Assuming there are no regulatory snags, the deal is expected to close in the second quarter of this year.

“Today’s agreement with Francisco Partners is a clear next step as IBM becomes even more focused on our platform-based hybrid cloud and AI strategy,” said Tom Rosamilia, senior vice president of IBM Software. “IBM remains committed to Watson, our broader AI business, and to the clients and partners we support in healthcare IT. Through this transaction, Francisco Partners acquires data and analytics assets that will benefit from the enhanced investment and expertise of a healthcare industry focused portfolio.”

The agreement calls for the current management team to continue in similar roles in the new standalone company, serving existing clients in life sciences, provider, imaging, payer and employer, and government health and human services sectors.

“We have followed IBM’s journey in healthcare data and analytics for a number of years and have a deep appreciation for its portfolio of innovative healthcare products,” said Ezra Perlman, co-president at Francisco Partners. “IBM built a market-leading team and provides its customers with mission critical products and outstanding service.”

In 2016 IBM doubled the size of its Watson Health business through the $2.6 billion acquisition of Truven Health Analytics. Truven offers healthcare data services targeted at employers, hospitals, and drug companies, and makes software that can parse through millions of patient records. Truven's main offices are in Ann Arbor, MI, Chicago, and Denver. At the time of the acquisition, Truven had around 2,500 employees.

The Truven deal followed other major healthcare acquisitions by the company, including Cleveland-based Explorys, Dallas-based Phytel, and Chicago-based Merge Healthcare. The company paid about $1 billion for Merge.

IBM said the assets acquired by Francisco Partners include extensive and diverse data sets and products, including Health Insights, MarketScan, Clinical Development, Social Program Management, Micromedex, and imaging software offerings.

Padmanabhan said it will be interesting to see how the new owners are able to leverage those data assets.

“IBM’s decision to sell its data assets is an indication that it’s not just enough to have the data. Applying advanced analytics on the data to generate insights that can make a difference in real-world applications is where the true value lies. IBM had several missteps early on, especially in cancer care applications, that created significant setbacks for the business that they could not recover from.”

In 2018, the Watson Health business went through a round of layoffs. The company declined to tell MD+DI at the time how many employees were let go other than to say it was a "small percentage" of the global business, but online commenters on TheLayoff.com and Watching IBM, along with multiple news reports citing unnamed sources from within the organization painted a different picture of the situation. One Dallas-based commenter on TheLayoff.com said that "we all knew it was coming but nobody expected it to be this fast and rampant," while another commenter estimated that 80% of that same Dallas-based office was let go.

Is healthcare just too hard for big tech?

While we have seen a trend in recent years with big tech firms showing an interest in healthcare, some of those companies are finding those efforts to be easier said than done.
 
“IBM’s decision to sell the Watson Health assets is another instance of a big tech firm acknowledging the challenges of the healthcare space. Last year, Google and Apple had significant setbacks, and Amazon has acknowledged challenges in scaling its Amazon Care business," Padmanabhan said. "In IBM’s case, they have missed out on the cloud opportunity and have lagged behind peers in emerging technology areas such as voice. While IBM’s challenges with Watson Health may have been unique to the organization, the fact is that big tech firms have multiple irons in the fire at any time, and for some healthcare may just be too hard.”

Padmanabhan does not think, however, that IBM's decision to sell the Watson Health assets is an indictment of the promise of AI in healthcare.

"Our research indicates AI was one of the top technology investments for health systems in 2021," he said. "Sure, there are challenges such as data quality and bias in the application of AI in the healthcare context, but by and large there has been progress with AI in healthcare. The emergence of other players, notably Google with its Mayo Partnership, or Microsoft with its partnership with healthcare industry consortium Truveta are strong indicators of progress."
 
Padmanabhan is co-author with Edward W. Marx, of Healthcare Digital Transformation: How Consumerism, Technology and Pandemic are Accelerating the Future (2020), and the host of The Big Unlock, a podcast focusing on healthcare digital transformation.

Tue, 02 Aug 2022 12:00:00 -0500 https://www.designnews.com/industry/ibm-bids-farewell-watson-health-assets
Killexams : Average data breach costs reach an all-time high

In brief: The average cost of an enterprise data breach has reached an all-time high and more often than not, companies raise the price of products or services after a breach to make up for the loss.

In its annual Cost of a Data Breach Report, IBM Security said the global average cost of a data breach is $4.35 million. That's an increase of 2.6 percent from $4.24 million last year and is up 12.7 percent from $3.86 million in the 2020 report. Worse yet, 60 percent of organizations that participated in the study said decisions to raise prices were directly related to security breaches.

Note that this is only the average. Looking at the outliers, we see that those operating in healthcare experienced the costliest breaches for the 12th year in a row with a record average of $10.1 million per incident.
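
The year-over-year percentages are straightforward to reproduce from the averages quoted above, as the short check below shows.

```python
# Reproduce the year-over-year changes from the averages quoted above ($M).
averages = {"2020": 3.86, "2021": 4.24, "2022": 4.35}

print(f"vs 2021: {(averages['2022'] / averages['2021'] - 1) * 100:.1f}%")   # ~2.6%
print(f"vs 2020: {(averages['2022'] / averages['2020'] - 1) * 100:.1f}%")   # ~12.7%
```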

Few will probably be surprised to learn that 83 percent of organizations have experienced more than one data breach in their lifetime. This is no doubt due in part to the fact that 62 percent of those studied felt they are not sufficiently staffed to meet their security needs.

As for attack vectors, IBM noted that 19 percent of breaches resulted from stolen or compromised credentials. Phishing campaigns led to 16 percent of incidents and were the costliest, leading to an average breach cost of $4.91 million. Misconfigured cloud servers caused 15 percent of breaches.

Speaking of the cloud, the study further found that 45 percent of breaches occurred in the cloud. Hybrid cloud environments experienced the lowest average breach cost at $3.8 million compared to organizations using public or private models at $5.02 million and $4.24 million on average, respectively.

Another interesting metric involves ransomware. Businesses that paid ransom demands reported an average of $610,000 less in breach costs compared to those that decided not to pay, but that figure didn't include the ransom amount itself. When factoring in last year's average ransom payment of $812,360, the pendulum swings the other way: once the ransom is counted, businesses that complied with demands ended up paying roughly $200,000 more overall in breach costs.

IBM commissioned Ponemon Institute to study 550 organizations across 17 countries and 17 industries between March 2021 and March 2022 to gather data for the report.

Image credit: Pixabay

Wed, 27 Jul 2022 10:03:00 -0500 Shawn Knight https://www.techspot.com/news/95446-average-data-breach-costs-reach-all-time-high.html
Killexams : Twitter Account Hacked? Even Security Companies Have Trouble Getting Back In

The regular reports from antivirus testing companies around the world are extremely helpful when I’m evaluating a new or updated antivirus program. I know all the players, so receiving an email from a lab’s executive team is no surprise, but the request in one recent email was unusual. Andreas Marx, CEO and co-founder of AV-Test Institute, wanted to know if I had any inside contacts at Twitter. It turned out that AV-Test Institute's main Twitter handle, @avtestorg, had been hacked, and his attempts to get help from Twitter were going unanswered.

How could this happen in a company with more than 15 years of experience in the security industry? Speaking with Marx and with Maik Morgenstern, technical director of AV-Test and its other CEO, I learned that even when you do everything right, you can still get hacked. As of this writing, the AV-Test account is still posting and retweeting random NFT spam, rather than providing support for AV-Test’s business and its customers.

After an account takeover, a Twitter feed is replaced by spam.


Neil J. Rubenking: How did you first learn the account was hacked?
Andreas Marx: I got a WhatsApp message from a well-known security researcher, just about 10 minutes after the account was hacked on July 25, with screenshots of the compromised Twitter account. Shortly thereafter, we got further notifications from other parties.

What was your first reaction to the hack?
Well, I tried to log in to the Twitter account on my mobile device, but the @avtestorg account was no longer accessible. I tried to check the account on my PC, but I was not able to log in and just saw the compromised Twitter account there, too. (Twitter actually asked me to create a new account!)

In my email Inbox, I saw three mails from Twitter, all in Russian. One e-mail message from Twitter said, "Пароль был изменён" ("Password has been changed") with the information "Недавно вы изменили пароль своей учетной записи @avtestorg." ("You recently changed your @avtestorg account password."). Just two minutes later, this email message arrived: "Адрес электронной почты для @avtestorg изменен" ("Email address for @avtestorg changed"). It said to confirm by following a link sent to the new email and ended, “If you haven't made these changes, please contact Twitter support immediately."

Password change warning in Russian (Credit: PCMag)

I'm German, and I've used Twitter in German for the last decade, so it appears to me that someone changed the default language first.

To my surprise, the new email address for the account was blanked out (not fully visible), and I saw the message that only the new address needs to be confirmed. So, Twitter doesn't even ask if the person behind the current email address agrees with the account change.

What techniques did you use to try regaining access?
We immediately contacted Twitter support and opened a case, “Regain access - Hacked or compromised," providing all the details needed to reclaim our account. When nothing happened after two days, we filed another case, with the same result so far: nothing.

What does Twitter recommend in a case like this?
Twitter suggests you contact their support via the website "I’m having problems with account access."

What was Twitter’s response?
There has been no response from Twitter so far, neither to the initial report via the website nor to a second request two days later. We also tried to reach support via @TwitterSupport, and tried to contact Twitter via email.

Well, “no response” is not entirely true. I've received a response from a bot who asked me, "Twitter would like your feedback. It should only take 2 minutes!" but that's from a third party.

What did you learn from this experience?
I have to admit that I'm still feeling totally lost. More than one week has passed, and there has been no reaction. I expected some kind of response from Twitter to my reports, as the changes to the account and the postings are very unusual. At the very least, the account should have been blocked in the short term, pending further verification. The account is still there, and we have no access to it, so it might still be in use by the malicious actors.

Any advice for others to protect their Twitter accounts?
We used a strong password and 2FA (two-factor authentication) to protect the account, but it looks like this was not enough. Maybe the attacker didn't steal the password but instead took over an active session; they were already logged in, and at that point most of the security features no longer apply. I still don't understand why changing the email address for the account wouldn't trigger a 2FA request. That's definitely a weakness of Twitter; other social networks handle this much better.

My strong recommendation is actually for Twitter, not for other users. Before changing the email address for an account, please ensure that the person behind the current email address agrees to the transfer. For many other websites and social media platforms, a confirmation link or code is sent before the account can be transferred, or another form of 2FA is required to ensure that the account cannot easily be hijacked.

 And, Twitter, please be kind and respond to messages.


When even the experts can’t prevent an account takeover, you may figure that you’re just out of luck. In truth, there’s quite a bit you can do to make sure your Twitter account and other important accounts remain secure. Start with the basics. If you don’t already have a password manager, get one. Use it to change the passwords for your sensitive accounts to something unique and random. Don’t worry; the password manager remembers them for you.

Even though the hackers in this story seem to have done an end-run around multi-factor authentication, that doesn’t mean it’s not valuable. When you engage multi-factor for your important accounts, you make it a lot harder for anyone to hack into them. Chances are good that a random hacker will skip your account and go for something easier, like an account that has a password of “password” with no added authentication.

You can log out of all other Twitter sessions. (Credit: PCMag)

Marx mentioned that the hacker might have gained access through an active, unlocked Twitter session. You can help your security by always logging out when you’re done using Twitter, or at least making sure your computers and smart devices are thoroughly secured. You can also view active and past sessions directly from your Twitter account and click a simple link to shut down all sessions except your current one.

So, what are you waiting for? Log into your Twitter account right now and make sure you have multi-factor authentication protecting it. Check those other sessions—if any of them look wonky, pull the plug and shut 'em all down. And be sure you're protecting that account with a strong password, not your birthday or your dog's name.


Wed, 03 Aug 2022 03:25:00 -0500 https://www.pcmag.com/news/twitter-account-hacked-even-security-companies-have-trouble-getting-back
Killexams : Platforms are the Future: The Rise of the Platform Solution – Geoffrey Cann


By Geoffrey Cann

Platform solutions save time and money and Excellerate business decision quality. And now there are even specific platforms for oil and gas.


I’ve reached into the archives for this article, which was originally published on August 16, 2021.

The Origins of Platforms

By platform, I don’t mean off-shore—I mean a computerized business system.

I didn’t appreciate it at the time, but my first exposure to platform thinking was in 1984 on my first corporate job at a big oil company where I supported a computer system called CORPS. It was written in PL/1 (a programming language) and ran on an IBM S360 TSO mainframe system at 1 am (yes, I carried a pager and frequently was jolted awake at 2 am when the system didn’t run correctly, usually my fault). Big companies love abbreviations—I can’t recall what exactly CORPS stood for (Corporate Reporting System, possibly) but I remember precisely why it existed: to save data center time and cost in mounting, spinning and dismounting magnetic tapes.

CORPS was a kind of middleman system. At the time, in an era before SAP, this big oil company had a handful of major commercial business systems that handled different aspects of petroleum product movement (purchases, sales, volumes, lifts, loadings, inventories for wholesale, retail, commercial and industrial markets) which individually fed data to many other systems (margin analysis, financials, customer accounts). The dozens of individual data feeds from one system to another (load magnetic tape on tape drive, copy data to another tape, dismount tapes) created a literal Gordian Knot of integrations. A failure in any one brought the whole system to its knees.

CORPS was an attempt to solve for this problem—all the inputs fed into one gigantic master file which then transformed the data, and generated all the individual data feeds. It was dramatically faster than running all the data transfers individually.

With hindsight, CORPS solved a many-to-many problem. Many data inputs going to many data outputs creates huge cost as each data supplier needs to maintain an individual connection with each data consumer: with 10 sources and 10 consumers, point-to-point integration means up to 100 connections, while a single hub needs only 20. A change to any one system can have a ripple effect on many other systems.

Modern digital platforms are very good at solving the many-to-many problem. For example, trading platforms facilitate buyers and sellers to find one another and transact. Amazon matches many suppliers of goods with many customers. AirBnB matches parties holding available accommodations with travellers needing a short term place to stay. Uber matches cars and drivers with customers. Platforms can often capture network effects by connecting very large numbers of counterparties.

Solve for Many to Many

As with CORPS, economic impacts are magnified when the problem of many distinct data sources feeding many data consumers is solved. Upstream oil and gas features a rich diversity of commercial software packages solving for very specific analytic problems, which results in an abundance of data sources. In addition, engineers rely on ERP systems, land systems, mapping software, paper binders, PDFs, data lakes, spreadsheets and historians to supply data. Other data sources include sensors, robots and edge computing devices, web services and RSS feeds, and SCADA systems.

Resources businesses also have numerous unique internal uses for the data (such as well planning, capital budgeting, geologic analysis, environmental studies, environmental reporting, compliance and financials) which can take the form of spreadsheets, commercial software products, analytics and visualisation software, and increasingly, machine learning algorithms and machines directly.

But what truly distinguishes platform solutions from other solutions is the presence of features on the platform that allow platform users to define their own ways of doing business. Here are a handful of the kinds of features that should be on the shortlist of criteria for choosing a platform for your upstream business.

ENABLE RAPID AND FLEXIBLE EXPANSION

The software code that enables access to the data in a data source (such as a spreadsheet) is generalised so that it can access data from any similar data source (another spreadsheet) and has a user interface that does not require the user to be the original programmer. These application programming interfaces (or APIs) are the magic that makes platforms super powerful. Once written, APIs are reusable, and they provide a kind of insulation between the data source (which can change on its own schedule) and the consumer of the data (which can also change). APIs are scalable in their own right.

I look for platforms that feature extensive and constantly growing libraries of these APIs that enable both data sources and data outputs.
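To make the idea concrete, here is a minimal Python sketch of the reusable-connector pattern described above. The names (register_reader, read, the CSV example, the file name) are invented for illustration only; they are not the API of PetroVisor or any other product, and a real platform exposes far richer, production-grade interfaces.

```python
# A toy connector registry: each source type gets one reader, written once,
# and every consumer reuses it instead of wiring its own point-to-point feed.
from typing import Callable, Dict, List

Record = Dict[str, object]
SourceReader = Callable[[str], List[Record]]

_READERS: Dict[str, SourceReader] = {}

def register_reader(source_type: str, reader: SourceReader) -> None:
    """Register one reader per source type (the 'API' for that source)."""
    _READERS[source_type] = reader

def read(source_type: str, location: str) -> List[Record]:
    """Consumers ask for data by source type, not by bespoke integration."""
    return _READERS[source_type](location)

def read_csv(path: str) -> List[Record]:
    """A spreadsheet-style reader, written once and reused everywhere."""
    import csv
    with open(path, newline="") as handle:
        return list(csv.DictReader(handle))

register_reader("csv", read_csv)
# records = read("csv", "well_volumes.csv")  # hypothetical file name
```

The point is simply that the spreadsheet reader is written once and every downstream consumer reuses it, which is what makes the connector library scalable.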

RENDER DATA TRULY USEFUL

Holding all of the data from the various sources so that they can be consumed by the users creates a data management problem. CORPS was rather unsophisticated in that regard in that the system simply created a nightly file of master data. Modern platforms deal with this challenge by incorporating their own data base solutions with meta data management, naming conventions, data structures, and the myriad elements required to hold and eventually manipulate petabytes of data.

One of the important features of CORPS was unit harmonization. Some feeder sources predated Canadian adoption of the metric system for weights and volumes, and product volumes were in tons and cubic feet, but reporting systems wanted only metric tonnes and cubic meters. Some measures were corrected at source for thermal expansion (the volume of petroleum expands as it heats up) and others needed correction to a standard. Some dates were in format YYMMDD, and needed to be transformed to YYYYMMDD.

It turns out that data harmonisation is still quite commonplace. Feeder system A refers to a well using its naming standard, but that standard is different from Feeder system B. The platform needs to harmonize the data, provide an audit trail about the changes to the data, and permit traceability forward and backward from feeder to consumer.

These services are called taxonomy; they enable this kind of data sophistication, and, like APIs, they are reusable and scalable. Good platforms invest extensively in taxonomy capabilities.
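As an illustration only, the following Python sketch shows what a harmonization pass with an audit trail might look like. The field names, well aliases, and the short-ton-to-tonne factor are assumptions made up for this example; they are not drawn from any vendor's taxonomy engine.

```python
# Illustrative harmonization pass with a simple audit trail. The field names,
# well aliases, and ton-to-tonne factor are invented for this sketch.
from datetime import datetime

WELL_ALIASES = {"HZ 102/06-14": "WELL-0614", "06-14-045-10W5": "WELL-0614"}
TON_TO_TONNE = 0.90718474  # assumes short tons; real rules depend on the feeder

def harmonize(record: dict, audit: list) -> dict:
    out = dict(record)
    if out.get("volume_unit") == "ton":                      # unit harmonization
        out["volume"] = round(out["volume"] * TON_TO_TONNE, 3)
        out["volume_unit"] = "tonne"
        audit.append(("volume", record["volume"], out["volume"]))
    if len(str(out.get("date", ""))) == 6:                   # YYMMDD -> YYYYMMDD
        out["date"] = datetime.strptime(str(out["date"]), "%y%m%d").strftime("%Y%m%d")
        audit.append(("date", record["date"], out["date"]))
    if out.get("well") in WELL_ALIASES:                      # naming harmonization
        out["well"] = WELL_ALIASES[out["well"]]
        audit.append(("well", record["well"], out["well"]))
    return out

audit_trail: list = []
clean = harmonize({"well": "HZ 102/06-14", "date": "840116",
                   "volume": 120.0, "volume_unit": "ton"}, audit_trail)
print(clean["well"], clean["date"], clean["volume"], clean["volume_unit"])
# -> WELL-0614 19840116 108.862 tonne
```

The audit list captures every change, which is the traceability forward and backward from feeder to consumer described above.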

ALLOW FOR RAPID LOW COST GROWTH

One of CORPS's problems was its lack of scalability. It only ran on one specifically configured mainframe. Physical devices like tape drives were hard-coded in the job control language (or JCL). It had no test bed (mainframes are expensive), so changes might not be fully debugged before they were introduced into production (as I discovered frequently with the 2 am pager). Adding a new interface took 8 weeks of work.

Platforms get around these limitations by designing for scalability. They run on industry standard cloud platforms that offer near infinite growth capacity for both data storage and compute cycles. One of the cloud instances is probably the test bed so that changes can be thoroughly tested out. Instead of Joe Programmer having to imagine a testing plan for every change, they incorporate sophisticated testing tools that run through thousands of different kinds of variations. They use responsive browser technology so that any device anywhere anytime can access the platform.

PROTECT THE ASSET

CORPS had no security. I don’t think cyber was even a thing back then. Yes, there were passwords to get onto systems, but I have no recollection at all of concepts like authentication of a data source, and authorization of a device to supply data. Data was not encrypted because all work took place inside the company computer system, there was no internet access (yet) and encryption imposed a compute cycle cost.

Today, encryption of data from end to end, and other security concepts, are top of mind for managers and leaders. Platform solutions are particularly at risk because they represent a single point of failure. Modern platform systems invest extensively in hardening their systems to repel the inevitable cyber attack.

ENABLE CITIZEN PROGRAMMERS

CORPS offered little value added beyond some data standardisation. Everything was hand-coded by a scarce programmer using an arcane software language (PL/1 in my case). Modern platforms include their own kind of easy to learn programming language so that ordinary users can code up unique and specific solutions to highly nuanced problems.

  • Does your process call for Finance to certify a financial projection (for cash flow reasons) before your engineers can embark on a well workover? No problem, just code up the workflow to match your method (see the sketch below).
  • Does your monthly reporting package need to transform some late-arriving data on midstream throughput from your gas processor? No problem, just build the routine you need and trigger it to execute on receipt of the data.
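
A minimal sketch of the first kind of user-defined rule follows, in Python. The request fields and rule names are hypothetical; real platforms provide their own scripting layer or workflow designer rather than raw Python, but the shape of the logic is similar.

```python
# A toy "citizen programmer" rule: hold a workover request until Finance has
# certified the projection. Field and rule names are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class WorkoverRequest:
    well: str
    estimated_cost: float
    finance_certified: bool = False
    status: str = "draft"

Rule = Callable[[WorkoverRequest], None]

def require_finance_signoff(req: WorkoverRequest) -> None:
    # The business rule an end user codes up: no certification, no scheduling.
    req.status = "scheduled" if req.finance_certified else "awaiting_finance"

RULES: List[Rule] = [require_finance_signoff]

def submit(req: WorkoverRequest) -> WorkoverRequest:
    for rule in RULES:  # every user-defined rule runs on submission
        rule(req)
    return req

print(submit(WorkoverRequest(well="WELL-0614", estimated_cost=250_000)).status)
# -> awaiting_finance
```

The request stays parked at "awaiting_finance" until the certification flag is set, which is exactly the kind of nuance a platform lets the business define for itself.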

Programmability opens up the possibility of third party applications being written for the platform. These applications then unlock new business models (as an app provider, for example), new technology integrations (such as incorporating augmented and virtual reality), new commercial models (using advertising, crypto currency payment methods, or subscriptions), data streaming (for transactions), and new off-platform integrations (such as merging specialised inaccessible datasets). I think of this as akin to the App Store from Apple, which has created billions of dollars of incremental value above the iOS platform.

Advanced platforms build in support for third parties to help evolve the value of the platform, including concepts like app stores, supplier certifications, programming standards, security protocols and data privacy standards.

DEEPEN YOUR ANALYTICS

With so much data at hand, platforms can offer layers of algorithms, models, data science, learning services, visualizations and manipulations that are too expensive for individual feeder systems, or not valuable without the significant data volumes that platforms provide.

Show Me The Money

As we can see from Amazon, Apple, Uber, and AirBnB, platform solutions are the future. They allow companies to implement change much faster. Hidden overhead costs from excessive data handling, hoarding, and manipulation get progressively squeezed out. Decision quality improves because there is more time for insight and analysis. Innovation can blossom because of the high quality of underlying data. New technologies such as machine learning and augmented reality finally have a fighting chance to succeed. New business models emerge.

A Working Example

For a great working example of a platform in action, check out Datagration and their PetroVisor platform. It offers all of the features that I expect to see in world class platform technology.

Platforms Are The Future

The roots of platform systems date back many years, but today’s versions are simply too valuable to ignore, and with the pressures to solve for problems like carbon tracing, industry specific platforms will grow in prominence and affordability.


Check out my latest book, ‘Carbon, Capital, and the Cloud: A Playbook for Digital Oil and Gas’, available on Amazon and other on-line bookshops.

You might also like my first book, Bits, Bytes, and Barrels: The Digital Transformation of Oil and Gas’, also available on Amazon.

Take Digital Oil and Gas, the one-day on-line digital oil and gas awareness course on Udemy.

Biz card: Geoffrey Cann on OVOU
Mobile: +1(587)830-6900
email: geoff@geoffreycann.com
website: geoffreycann.com
LinkedIn: www.linkedin.com/in/training-digital-oil-gas


Mon, 25 Jul 2022 05:18:00 -0500 https://energynow.ca/2022/07/the-rise-of-the-platform-solution-geoffrey-cann-2/