20 Years Of Copyright Wars

Image: Gizmodo/Shutterstock (Shutterstock)

Gizmodo is 20 years old! To celebrate the anniversary, we’re looking back at some of the most significant ways our lives have been thrown for a loop by our digital tools.

In the revisionist history of the internet, we were all sold down the river by the “techno-optimists,” who assumed that once we “connected every person in the world” we’d enter a kind of post-industrial nirvana, a condition so wondrous that it behooved us all to “move fast,” even if that meant that we’d “break things.”

The problem is that this is the Facebook story, not the internet story, and notwithstanding decades of well-financed attempts at enclosure, “Facebook” is not “the internet” (also, it’s not even Facebook’s story: “connecting every person in the world” was always a euphemism for “spy on every person in the world in order to enrich our shareholders”).

20 years ago, the internet’s early adopters were excited about the digital world’s potential, but we were also terrified about how it could go wrong.

Take the Electronic Frontier Foundation, an organization that hired me 20 years ago, just a few months after the large record companies filed suit against the P2P file-sharing service Napster, naming its investors and their backers (giant insurance companies and pension funds) in the suit.

The Copyright Wars were kicking off. Napster showed that the record industry’s plan to capitalize on the internet with another “format shift” was doomed: we may have re-bought our vinyl on 8-track, re-bought our 8-tracks on cassette, and re-bought our cassettes on CD, but now we were in the driver’s seat. We were going to rip our CDs, make playlists, share the tracks, and bring back the 80 percent of recorded music that the labels had withdrawn from sale, one MP3 at a time.

The rate of Napster’s ascent was dizzying. In 18 short months, Napster attracted 52 million users, making it the fastest-adopted technology in history, surpassing DVD players. For comparison, this was shortly after the 2000 presidential elections, in which 50,999,897 votes were cast for the loser (the “winner” got 50,456,002 votes).

Napster’s fall was just as dizzying. In July 2001, the service shut down after a 25-month run. A month later, it declared bankruptcy. The labels’ attempt to drag Napster’s VCs and their backers into the suit failed, but it didn’t matter – the investor class got the message, and P2P file-sharing became balance-sheet poison.

P2P users didn’t care. They just moved from “platforms” to “protocols”, switching to increasingly decentralized systems like Gnutella and BitTorrent – systems that, in turn, eliminated their own central points of failure in a relentless drive to trackerlessness.

P2P users interpreted the law as damage and routed around it. What they didn’t do, for the most part, was develop a political consciousness. If “P2P users” were a political party, they could have elected a president. Instead, they steered clear of politics, committing the original sin of nerd hubris: “Our superior technology makes your inferior laws irrelevant.”

P2P users may not have been interested in politics, but politics was interested in P2P users. The record industry sued 19,000 kids, singling out young P2P developers for special treatment. For example, there was the college computer-science major who maintained a free software package called FlatLAN, which indexed the shared files on any local network. The labels offered him a settlement: if he changed majors and gave up programming computers, they wouldn’t seek $150,000 in statutory damages for each track in his MP3 collection.

This phase of the P2P wars was a race between civil disobedience and regulatory capture. Senate Commerce Chairman Fritz Hollings introduced a bill that would make it a crime to sell a computer unless it had a special copyright enforcement chip (wags dubbed this hypothetical co-processor the “Fritz Chip”) that would (somehow) block all unauthorized uses of copyrighted works. The Hollings bill would have required total network surveillance of every packet going in or out of the USA to block the importation of software that might defeat this chip.

The Hollings bill died, but the entertainment industry had a backup plan: the FCC enacted “the Broadcast Flag regulation,” a rule that would require all digital devices and their operating systems to be approved by a regulator who would ensure that they were designed to foil their owners’ attempts to save, copy, or manipulate high-definition digital videos. (EFF subsequently convinced a federal judge that this order was illegal).

Thus did the DRM Wars begin: a battle over whether our devices would be designed to obey us, or police us. The DRM Wars had been percolating for many years, ever since Bill Clinton signed the Digital Millennium Copyright Act into law in 1998.

The DMCA is a complex, multifaceted law, but the clause relevant to this part of history is Section 1201, the “anti-circumvention” rule that makes it a jailable felony to provide tools or information that would help someone defeat DRM (“access controls for copyrighted works”). DMCA 1201 is so broadly worded that it bans removing DRM even when it does not lead to copyright infringement. For example, bypassing the DRM on a printer-ink cartridge lets you print using third-party ink, which is in no way a violation of anyone’s copyright, but because you have to bypass DRM to do it, anyone who gives you a printer jailbreaking tool risks a five-year prison sentence and a $500,000 fine…for a first offense.

DMCA 1201 makes it illegal to remove DRM. The Hollings Bill and the Broadcast Flag would have made it a crime to sell a device unless it had DRM. Combine the two and you get a world where everything has DRM and no one is allowed to do anything about it.

DRM on your media is gross and terrible, a way to turn your media collection into a sandcastle that melts away when the tide comes in. But the DRM Wars are only incidentally about media. The real action is in the integration of DRM in the Internet of Things, which lets giant corporations dictate which software your computer can run, and who can fix your gadgets (this also means that hospitals in the middle of a once-in-a-century pandemic can’t fix their ventilators). DRM in embedded systems also means that researchers who reveal security defects in widely used programs face arrest on federal charges, and it means that scientific conferences risk civil and criminal liability for providing a forum to discuss such research.

As microprocessors plummeted in price, it became practical to embed them in an ever-expanding constellation of devices, turning your home, your car and even your toilet into sensor-studded, always-on, networked devices. Manufacturers seized on the flimsiest bit of “interactivity” as justification for putting their crap on the internet, but the true motivation is to be found in DMCA 1201: once a gadget has a chip, it can have a thin skin of DRM, which is a felony to remove.

You may own the device, but it pwns you: you can’t remove that DRM without facing a prison sentence, so the manufacturer can booby-trap its gizmos so that any time your interests conflict with its commercial imperatives, you will lose. As Jay Freeman says, DMCA 1201 is a way to turn DRM into a de facto law called “Felony Contempt of Business Model.”

The DRM Wars rage on, under many new guises. These days, the fight is often called “Right to Repair,” but that’s just a corner of the raging battle over who gets to decide how the digital technology you rely on for climate control, shelter, education, romance, finance, politics and civics works.

The copyright maximalists cheered DRM on as a means to prevent “piracy,” and dismissed anyone who warned about the dangers of turning our devices into ubiquitous wardens and unauditable reservoirs of exploitable software bugs as a deranged zealot.

It’s a natural enough mistake for anyone who treats networked digital infrastructure as a glorified video-on-demand service, and not as the nervous system of 21st century civilization. That worldview – that the internet is cable TV for your pocket rectangle – is what led those same people to demand copyright filters for every kind of online social space.

Filtering proposals have been there all along, since the days of the Broadcast Flag and even the passage of the DMCA, but they only came into widespread use in 2007, when Google announced a filtering system for YouTube called Content ID.

Google bought YouTube in 2006, to replace its failing in-house rival Google Video (Google is a buying-things company, not a making-things company; with the exception of Search and Gmail, all its successes are acquisitions, while its made-at-Google alternatives from Glass to G+ to Reader have failed).

YouTube attracted far more users than Google Video – and also far more legal trouble. A bruising, multi-billion-dollar lawsuit from Viacom was an omen of more litigation to come.

Content ID was an effort to head off future litigation. Selected media companies were invited to submit the works they claimed to hold the copyright to, and Content ID scoured all existing and new user uploads for matches. Rightsholders got to decide how Content ID handled these matches: they could “monetize” them (taking the ad revenue that the user’s video generated) or they could block them.

Content ID is one of those systems that works well, but fails badly. It has three critical failings:

  1. YouTube is extremely tolerant of false copyright claims. Media companies have claimed everything from birdsong to Brahms without being kicked off the system.
  2. Content ID tolerates false positives. The designers of any audio fingerprinting system have to decide how close two files must be to trigger a match. If the system is too strict, it can be trivially defeated by adding a little noise, slicing out a few seconds of the stream, or imperceptibly shifting the tones. On the other hand, very loose matching creates a dragnet that scoops up a lot of dolphins with the tuna. Content ID is tuned to block infringement even if that means taking down non-infringing material. That’s how a recording of white noise can attract multiple Content ID claims, and why virtually any classical music performance (including those by music teachers) gets claimed by Sony.
  3. It is impossible for Content ID to understand and accommodate fair use. Fair use is a badly understood but vital part of copyright; as the Supreme Court says, fair use is the free-expression escape valve in copyright, the thing that makes it possible to square copyright (in which the government creates a law about who is allowed to publish certain phrases) with the First Amendment (which bars the government from creating such a law). There is no bright-line test for whether something is fair use; rather, there is a large body of jurisprudence and some statutory factors that have to be considered in their totality to determine whether a use is fair. Here are some uses that have been found fair under some circumstances: making copies of Hollywood blockbusters and bringing them to your friends’ houses to watch at viewing parties; copying an entire commercial news article into an ad-supported message board so participants can read and discuss it; publishing a commercial bestselling novel that is an unauthorized retelling of another bestseller, specifically for the purpose of discrediting and replacing the original book in the reader’s imagination. Now, these were highly specific circumstances, and I’m not trying to say that all copying is fair, but Google’s algorithms can’t ever make the fine distinctions that created these exceptions, and YouTube doesn’t even try. Instead, YouTube mostly acts as if fair use didn’t exist. Creators whose work is demonetized or removed can argue fair use in their appeals, but the process is beyond baroque and generally discourages fair use.
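The threshold tradeoff in point 2 is easy to see in miniature. Here is a toy sketch – emphatically not Content ID’s actual algorithm – that models a fingerprint as a sequence of spectral hash values and declares a match when enough positions agree. A strict threshold lets a lightly doctored copy slip through; a loose one also sweeps up an unrelated recording:

```python
# Toy illustration of the matching-threshold tradeoff in an audio-
# fingerprint filter. A "fingerprint" is modeled as a list of hash
# values; two clips match when enough positions agree.

def similarity(fp_a, fp_b):
    """Fraction of positions where two equal-length fingerprints agree."""
    matches = sum(1 for a, b in zip(fp_a, fp_b) if a == b)
    return matches / len(fp_a)

def is_match(fp_a, fp_b, threshold):
    return similarity(fp_a, fp_b) >= threshold

original   = [10, 22, 35, 47, 51, 64, 78, 83, 90, 99]
noisy_copy = [10, 22, 36, 47, 51, 64, 78, 84, 90, 99]  # same work, slight noise added
other_work = [10, 23, 35, 48, 51, 66, 78, 85, 90, 99]  # a different recording

# Strict matching: the doctored copy evades the filter (false negative).
print(is_match(original, noisy_copy, threshold=0.95))  # False
# Loose matching catches the copy...
print(is_match(original, noisy_copy, threshold=0.60))  # True
# ...but also flags the unrelated work (false positive).
print(is_match(original, other_work, threshold=0.60))  # True
```

There is no threshold that catches every evasion without also scooping up dolphins with the tuna – which is why Content ID, tuned to err toward blocking, claims white noise and birdsong.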

Content ID resulted in billions of dollars in revenue for rightsholders, but it in no way ended copyright infringement on YouTube, as Big Content lobbyists frequently remind us. YouTube spent $100,000,000 (and counting) on the system, which explains why only the largest Big Tech companies, like Facebook, have attempted their own filters.

Copyright filters are derided as inadequate by rightsholder groups, but that doesn’t stop them from demanding more of them. In 2019, the EU erupted in controversy over Article 13 of the Copyright in the Digital Single Market Directive, which requires platforms for user-generated content to prevent the “re-uploading” of material that has been taken down following a copyright complaint. Article 13 triggered street demonstrations in cities all over Europe, and a petition opposing Article 13 attracted more signatories than any petition in EU history.

The official in charge of the Article 13 push, a German politician named Axel Voss, repeatedly insisted that its goal of preventing re-uploading could be accomplished without automated filters – after all, the existing E-Commerce Directive banned “general monitoring obligations” and the General Data Protection Regulation (GDPR) bans “processing” of your uploads without consent.

Article 13 came up for a line-item vote in March 2019 and carried by five votes. Afterward, ten Members of the European Parliament claimed they were confused and pressed the wrong button; their votes were switched in the official record, but under EU procedures, the outcome of the (now losing) vote was not changed.

Almost simultaneously, Axel Voss admitted that there was no way this would work without automated filters. This wasn’t surprising: after all, this is what everyone had said all along, including lobbyists for Facebook and YouTube, who endorsed the idea of mandatory filters as a workable solution to copyright infringement.

European national governments are now struggling to implement Article 13 (renumbered in the final regulation and now called Article 17), and when that’s done, there’s a whole slew of future filter mandates requiring implementation, like the terror regulation that requires platforms to identify and block “terrorist” and “extremist” content and keep it down. This has all the constitutional deficiencies of Article 13/17, and even higher stakes, because the false positives that “terrorism filters” take down aren’t white noise or birdsong – they’re the war-crime evidence painstakingly gathered by survivors.

In the USA, the Copyright Office is pondering its own copyright filter mandate, which would force all platforms that allow users to publish text, audio, code or video to compare users’ submissions to a database of copyrighted works and block anything that someone has claimed as their own.

As with Article 17 (née Article 13), such a measure will come at enormous cost. Remember, Content ID cost more than $100 million to build, and Content ID only accomplishes a minute sliver of the obligations envisioned by Article 17 and the US Copyright Office proposal.

Adding more than $100 million to the startup costs of any new online platform only makes sense if your view of the internet is five giant websites filled with screenshots of text from the other four. But if you hold out any hope for a more decentralized future built on protocols, not platforms, then filtering mandates should extinguish it.

Which brings me back to 20 years ago, and the naivete of the techno-optimists. 20 years ago, technology activists understood and feared the possibilities for technological dystopia. The rallying cry back then wasn’t “this will all be amazing,” it was “this will all be great…but only if we don’t screw it up.”

The Napster Wars weren’t animated by free music, but by a free internet – by the principle that we should be free to build software that let people talk directly to one another, without giving corporations or governments a veto over who could connect or what they could say.

The DRM Wars weren’t about controlling the distribution of digital videos, they were fought by people who feared that our devices would be redesigned to control us, not take orders from us, and that this would come to permeate our whole digital lives so that every appliance, gadget and device, from our speakers to our cars to our medical implants, would become a locus of surveillance and control.

The filter wars aren’t about whether you can upload music or movies – they’re about whether cops can prevent you from sharing videos of their actions by playing pop music to trigger filters and block your uploads.

From Napster to DRM to filters, the fight has always had the same stakes: will our digital nervous system be designed to spy on us and boss us around, or will it serve as a tool to connect us and let us coordinate our collective works?

But the techno-optimists – myself included – did miss something important 20 years ago. We missed the fact that antitrust law was a dead letter. Having lived through the breakup of AT&T (which unleashed modems and low-cost long-distance on America, setting the stage for the commercial internet); having lived through the 12-year IBM antitrust investigation (which led Big Blue to build a PC without making its own operating system and without blocking third-party clones of its ROMs); having lived through Microsoft’s seven-year turn in the antitrust barrel (which tamed Microsoft so that it spared Google from the vicious monopoly tactics that it used to destroy Netscape); we thought that we could rely on regulators to keep tech fair.

That was a huge mistake. In reality, by 1982, antitrust law was a dead man walking. The last major action of antitrust enforcers was breaking up AT&T. They were too weak to carry on against IBM. They were too weak to prevent the “Baby Bells” that emerged from AT&T’s breakup from re-merging with one another. They were even too weak to win their slam-dunk case against Microsoft.

That was by design. Under Reagan, the business lobby’s “consumer welfare” theory of antitrust (which holds that monopolies are actually “efficient” and should only be challenged when there is mathematical proof that a merger will drive up prices) moved from the fringes to the mainstream. Nearly half of all US Federal judges attended cushy junkets in Florida where these theories were taught, and afterwards they consistently ruled in favor of monopolies.

This process was a slow burn, but now, in hindsight, it’s easy to see how it drastically remade our entire economy, including tech. The list of concentrated industries includes everything from eyeglasses to glass bottles, shipping to finance, wrestling to cheerleading, railroads to airlines, and, of course, tech and entertainment.

40 years after the neutering of antitrust, it’s hard to remember that we once lived in a world that barred corporations from growing by buying their small, emerging competitors, merging with their largest rivals, or driving other businesses out of the marketplace with subsidized “predatory pricing.”

Yet if this regime had been intact for the rise of tech, we’d live in a very different world. Where would Google be without the power to gobble up small competitors? Recall that Google’s in-house projects (with the exception of Search and Gmail) have either failed outright or amounted to very little, and it was through buying up other businesses that Google developed its entire ad-tech stack, its mobile platform, its video platform, even its server infrastructure tools.

Google’s not alone in this – Big Tech isn’t a product-inventing machine, it’s a company-buying machine. Apple buys companies as often as you buy groceries. Facebook buys companies specifically to wipe out potential competitors.

But this isn’t just a Big Tech phenomenon. The transformation of the film industry – which is now dominated by just four studios – is a story of titanic mergers between giant, profitable companies, and not a tale of a few companies succeeding so wildly that their rivals go bust.

Here is an area where people with legitimate concerns over creators’ falling share of the revenues their labor generates and people who don’t want a half-dozen tech bros controlling the future have real common ground.

The fight to make Spotify pay artists fairly is doomed for so long as Spotify and the major labels can conspire to rip off artists. The fight to get journalists paid depends on ending illegal Google-Facebook collusion to steal ad-revenue from publishers. The fight to get mobile creators paid fairly runs through ending the mobile duopoly’s massive price-gouging on apps.

All of which depends on fighting a new war, an anti-monopoly war: the natural successor to the Napster Wars and the DRM Wars and the Filter Wars. It’s a war with innumerable allies, from the people who hate that all the beer is being brewed by just two companies to the people who are outraged that all the shipping in the world is (mis)managed by four cartels, to the people who are coming to realize that “inflation” is often just CEOs of highly concentrated industries jacking up prices because they know that no competitor will make them stop.

The anti-monopoly war is moving so swiftly, and in so many places, that a lot of combatants in the old tech fights haven’t even noticed that the battleground has shifted.

But this is a new era, and a new fight, a fight over whether a world where the line between “offline” and “online” has blurred into insignificance will be democratically accountable and fair, or whether it will be run by a handful of giant corporations and their former executives who are spending a few years working as regulators.

All the tech antitrust laws in the world won’t help us if running an online platform comes with an obligation to spend hundreds of millions of dollars to spy on your users and block their unlawful or unsavory speech; nor will reform help us if it continues to be illegal to jailbreak our devices and smash the chains that bind our devices to their manufacturers’ whims.

The Copyright Wars have always been premised on the notion that tech companies should be so enormous that they can afford to develop and maintain the invasive technologies needed to police their users’ conduct to a fine degree. The Anti-Monopoly Wars are premised on the idea that tech and entertainment companies must be made small enough that creative workers and audiences can fit them in a bathtub… and drown them.

Cory Doctorow is a science fiction author, activist and journalist. His next book is Chokepoint Capitalism (co-authored with Rebecca Giblin), on how Big Tech and Big Content rigged creative labor markets - and how to unrig them.

A Strategic Mess
(Photo courtesy of Greg Cousa)

Strategy strategy everywhere.
Strategy strategy, in my hair.
On my road to scale I bumped into End Game, stumbled onto Systems Change, which confused my Theory of Change.
My Mission’s drifting and my Vision’s blurry.
My fundraising “sounds” uplifting yet all I do is worry.

When it comes to developing strategy, it’s easy for a social venture’s leadership team to revert to the same old strategic planning process they’ve used in the past: Inertia is real, and repeating the same process, even if you know it’s not the best, is comfortable and easy. Why does strategy have to be so hard? Well ... because it is. Strategy is hard! It’s about the future, and—whatever any charismatic leader, politician, funder, or Burning Man attendee might say—the future is murky, messy, confusing, and 100 percent unknown.

However, the strategic process doesn’t have to be. Even with a visionary leader, a strong brand, and clarity on future trends, good strategy will always be translucent, at best; all the indicators in the world can’t make the future fully known. But the approach to developing strategy should be transparent.

Ready? Neither am I. Let’s jump in together.

In my experience—both in research and application with ventures—there are three core, broad phases of developing a sound impact strategy, the 3 As: Assess, Approach, and Action.

Assess is about asking the right questions, inside and outside your organization. The absolute best thing you can do in the Assess phase is get the right questions down, both internal to the organization and external, looking outward at the market. Then, obviously, going out and getting the best answers you can! In other words, Assess is about taking a step back to look at your organization from a bird’s-eye view, in context: Why does our venture exist? What has our organization been trying to accomplish all these years? How are we doing? What is going on in our market? Who are our core clients? What adjacent clients could we serve? Are there partners or competitors in our sector, and how are they doing? What do the latest trends tell us about our space?

Approach is about designing the right objectives, hypotheses, and overall approach to why, what, how, when, and where you’re going to scale. The Approach you build should be based on the outputs of the Assess Phase.

Action is Jackie Chan time! It’s about adjusting your organization and detailing your plans, defining, and finding the necessary resources and infrastructure, and assessing the risk and rewards of the Approach. Most importantly, it’s about implementation. Going out and testing your strategic hypotheses, followed by adjusting your strategy based on what you learned. The adjustments should lead to further action: if the test(s) were successful, wider and/or faster execution; if the tests were not successful, contraction and/or pivots.

Strategy should be linear and circular ... but not at the same time. It’s important to follow the above steps chronologically, as each step draws from the previous one and feeds into the next, but it’s just as important to revisit previous phases in light of additional information you gather. For example, you won’t simply review the question “Why do we exist?” once at the beginning of the process; after you’ve tested a strategy in the marketplace, you need to go back and ask, in light of the pilot’s success or failure, whether it supports “why we exist” (or hinders it).

Let’s check out some phenomenal organizations’ journeys I’ve had the privilege of observing.

1. Assess

You absolutely want to start by spending deep time looking in the mirror AND looking at the world in which your venture operates. In this light—looking within and without—the priority is to develop questions to answer, both internal to the organization and external to the sector you want to work in. Some questions will be necessary for most ventures: Why do we exist? What are our Core Capabilities? How successful have we been over the past few years? Why? And some questions will be quite specific to your venture: Do we only want to focus our efforts on smart clinics?

For example, take an amazing business I’ve observed, which develops and sells (among other things) biodigesters: a prefabricated modular package that includes a full suite of biogas appliances and connections. Basically, “organic waste” goes in, and out pops renewable biogas and organic fertilizer. It sounded magical to me, and the company’s impact to date is just as mind-blowing: Over 180,000 people have access to clean, renewable energy and organic fertilizer; over 347,000 tons of CO2e have been mitigated; over 17,000,000 tons of waste have been treated; and over 330,000 hectares a year are now farmed with organic biofertilizer.

The 3 As of Impact Strategy: Assess, Approach, Action

In the early stages, the company’s scale journey included assessing why they exist and what makes them unique. However, this year their Assess phase has centered on more nuanced internal and external questions: What’s going on in the carbon-offset market? Should we have a carbon-offset strategy, and if so, what should it look like? How are our B2C and B2B models working in Kenya and India, respectively? What current trends in climate change policy do we see? What future trends do we anticipate in farmer migration? This is one of those amazing and somehow-still-under-the-radar ventures that is knocking on the door of profitability while massively impacting thousands and thousands of people around the globe and mitigating climate change. Leonardo DiCaprio, eat your heart out.

2. Approach

The Approach phase is arguably both the most exciting and the murkiest. It involves taking the outputs of the Assess phase and asking, “Now what are we going to do?” When you’re starting to think about scale, after all, you’ve already nailed down your product or service, you have a sound operating model and relatively stable finances, and you’ve been around the block a few times. You’re now thinking about circling other blocks. In the Approach phase, in other words, you’re thinking through big options: Do we scale geographically? Should we diversify the clients we serve? Should we innovate on our core products/services to go deeper and/or wider? (And sometimes the biggest challenge in the Approach phase is focusing and saying, “No.”)

Take, for example, The BOMA Project, an incredible nonprofit that supports small-business creation for women in the poorest parts of the world, starting with northern Kenya. Standing on the shoulders of giants—aka BRAC—BOMA iterated on proven poverty graduation models, calling theirs “REAP,” for Rural Entrepreneur Access Project. By 2016, having developed a proven model, BOMA started to ask itself some of the more common Approach questions: Why do we want to scale? What are we going to scale? How? Where? Their answer was a three-pronged approach: 1. Continue to scale through direct implementation; 2. Scale through other Big, International NGOs (BINGOs); 3. Scale through local governments.

Though not without its challenges, BOMA has definitely scaled up their impact: By the end of 2016 their cumulative impact was ~69,000 people graduated out of extreme poverty via women’s small businesses. Fast-forward five years to the end of 2021, and BOMA’s cumulative impact was over 350,000 individuals, a factor of five. Hockey-stick scale? Not quite. Not yet, but they’re getting there. The BOMA Project’s 2027 impact goal is a factor of ten on their 2021 impact: aiming to lift 3 million women, children, youth, and refugees out of poverty by 2027. Grab your ice skates BOMA squad.

In the social sector, the focal point of a scale strategy is crafting a robust, well-articulated hypothesis to test. For BOMA, the hypotheses for larger-scale growth looked like the past strategy in some ways: new locations, direct implementation, strategic partnerships, and government adoption. But they looked nothing like the past strategy in other ways: innovating on the core model to test a few key hypotheses:

  1. Condensed REAP: Testing a shortened, 16-month (down from 24-month) intervention, aiming to improve cost efficiencies and the scalability of their graduation model while still achieving the same, or similar, results.
  2. Youth REAP: BOMA looked at its addressable market—much of sub-Saharan Africa—and saw the growing issue of youth unemployment and decided to see if iterations on its core model could help.
  3. Green REAP: BOMA saw the negative impact of climate change firsthand on its existing client base and decided to start a climate-smart adaptation of BOMA’s approach, specifically supporting women and youth to create viable green enterprises.

(In addition, BOMA is testing out a nutrition-based REAP model, and another focused on refugees.)

What I love about the BOMA Project is that because they see the need they are focusing on—extreme poverty—growing, they felt now was the time to be more ambitious by testing out new approaches. In the worst-case scenario, new approaches might have a nominal impact—but with tremendous learning—while in the best-case scenario, they would see the organization diversify who they serve and how, leading to all kinds of new opportunities, stories, and of course, large-scale impact.

3. Action

Ironically, if the Approach phase is the most exciting, the Action phase can be the most boring. Buckle up and get ready to sleep if you’re a big-picture entrepreneur: The Action phase is where the rubber meets the road. The focus in the Action phase needs to be building out and testing your Approach, with the focus on real implementation.

But as Benjamin Franklin once said, “failing to prepare is preparing to fail.” At the beginning is “pre-implementation,” the boring, nuanced, incredibly critical part where a venture builds out the key components of the operating model from the previous phase, based on the venture’s hypotheses for scale. Most big-vision CEOs and Founders at this point would rather stick their face in a bucket of hot sauce eyes-wide-open than drill down on infrastructure needs and further develop the three-headed “P” snake: Policies, Processes, and Procedures. I hate to say it, but these are the backbone of what makes a venture scale successfully.

A good example is the Meltwater Entrepreneurial School of Technology (MEST). Headquartered in Accra, Ghana, MEST is a pan-African, technology-focused training program, seed fund, and venture incubator and accelerator. Founded in 2008 on the philosophy that talent is universal, but opportunity is not, MEST initially set up an Entrepreneur-In-Training (EIT) program, which provides deep software and entrepreneurship training to Africa’s top talented youth. The end goal of the EIT training program for participants is to launch a software startup, so MEST also created an incubator and seed fund to provide a launchpad for these nascent ventures in the form of seed capital and technical support. (I’ve had the pleasure of working at MEST for the past three to four years, getting the inside-out opportunity to help the organization scale.)

The 2019 EIT class had over 5,000 applicants for 60 spots. (Making it harder to get into than Harvard. Just kidding. But not!) Additionally, MEST was getting demand from ventures across the continent, who were asking if they could pay to join our incubator. The problem was that MEST was a “closed circuit”: you needed to get into the EIT program to be accepted into the incubator. At this point, we discussed the need, desire, and opportunity to scale. This was when the quick Assess work took place for MEST, looking at who we are, who we’ve been, what’s going on in our market, and who our target audience is. On the latter, this was critical, and we were clear that we had three broad target audiences: 1. un- or underemployed young Africans; 2. startup/early-stage ventures; 3. scale-stage ventures.

From here, in the Approach phase, we were clear that we wanted to scale to reach more young Africans and ventures, that we had a lot of assets and expertise in house to leverage as we scaled, that we wanted to start ASAP, and that we wanted to first test our pilots in MEST’s home country of Ghana. Finally, we developed the high-level strategic hypotheses and design for three new pilot programs to test. While the lessons, ideas, and challenges for MEST were based on deep, years-long operations, the initial strategic hypotheses were not overly complicated or massively researched, and yours don’t need to be either. We came up with the three new strategic hypotheses in a few hours: Pre-MEST, MEST Express, and MEST Scale:

  1. Pre-MEST would aim to scale key elements of our training program to help MEST scale its impact from reaching ~60 Africans a year, to reaching hundreds of Ghanaians annually, en route to thousands across the continent.
  2. Where our historical incubator would support ~10 startups a year across the continent, MEST Express would aim to accelerate 30-40 startups in Ghana alone each year.
  3. And finally, MEST Scale, in some ways our most ambitious program, aimed to support later-stage, growth-ready small and medium-sized enterprises (SMEs) on their scale journey.

While the initial strategic hypotheses came within a single day, we spent months building these out to test. We decided very early that we didn’t want a bloated NGO with tens or hundreds of offices around Africa. As such, for Pre-MEST and MEST Express, we knew that part of our ability to scale sustainably over the long term would require working with other Entrepreneur Support Organizations (ESOs) in parts of the country where we didn’t operate and didn’t know the local context. We were after scaled impact, not a scaled organization. In addition to implementing partners in the form of ESOs, we knew we also needed a network of different partner types: implementing partners, mentors, experts, investors, researchers, and funders. On the funding side, we found a great-fit partner in the Mastercard Foundation’s Young Africa Works strategy, which provided timely capital, flexibility, and thought-partnership.

When the plans were set, contracts were signed, and the excitement levels were more bubbly than a champagne factory, I flew out to Ghana in February 2020 to start the Action phase of the strategy work. We scaled perfectly, and everything went exactly to plan. Thanks for reading!

Just kidding.

The next month, this horrible thing called COVID-19 was declared a global pandemic by the World Health Organization, and just like the rest of the world, everything came to a grinding halt. The wind was knocked out of our sails, but the Mastercard Foundation encouraged us to move quicker to help young people and startups—already facing massive barriers—weather COVID-19. When we launched a few months later, with our cobbled-together programs, we not only had impact, but we exponentially increased our learnings as a program team much faster than if we had stuck to our original timelines and plans. We did it by spending a lot of time building the right infrastructure, including rolling out new systems, creating the necessary policies, processes, and procedures, assessing the risks that we anticipated, and dealing with the ones we didn’t.

The absolute best part of the Action phase was building the team needed to scale. And so the silver lining with COVID-19 was that it allowed us to pull staff from MEST’s training program to our three new programs, given that the 2020 EIT class had been postponed. These key folks were critical in providing institutional knowledge, leveraging key partnerships, assuring some level of organizational cultural continuity on the new programs team, and catalyzing the much-needed, and still challenging, integration of our new work with our existing work. The new organizational structure reflected MEST’s legacy while accounting for the new expertise and capabilities needed to scale.

After over two years of pilots, with some challenges knocked down and others still being worked through, the programs have largely exceeded our lofty expectations. We are on track to exceed, or have already exceeded, the high-impact targets we set for ourselves three years ago: Roughly 400 young Ghanaians have found dignified and meaningful work, over 60 startups have been accelerated, leading to an additional 500+ jobs created, and over a million dollars in additional investment has been raised.

As the ink dried on the chapters of 2021, and 2022’s first pages turned, we spent some time reviewing, reflecting, and thinking about what the future holds for MEST in 2023 and beyond, given the challenges and successes of our three new programs, coupled with our ambition for larger-scale impact, quicker. Like a veteran climber, MEST is looking for bigger mountains to climb in Africa, seeking to scale our impact across West Africa in the next few years, with a determined eye to the southern and eastern parts of the continent thereafter. As with a good education, the more we’ve scaled, the more we’ve learned, and the more we’ve learned, the more we realize that we don’t know, but want to go figure out.

Read more stories by Greg Coussa.

The Only Disaster Recovery Guide You Will Ever Need

Disaster recovery (DR) refers to the security planning area that aims to protect your organization from the negative effects of significant adverse events. It allows an organization to either maintain or quickly resume its mission-critical functions following a data disaster without incurring significant losses in business operations or revenues.

Disasters come in different shapes and sizes. They do not only refer to catastrophic events such as earthquakes, tornadoes or hurricanes, but also security incidents such as equipment failures, cyber-attacks, or even terrorism.

In preparation, organizations and companies create DR plans detailing processes to follow and actions to take to resume their mission-critical functions.

What is Disaster Recovery?

Disaster recovery focuses on the IT systems that support an organization’s critical business functions. It is often associated with the term business continuity, but the two are not entirely interchangeable: DR is one part of business continuity, which focuses more broadly on keeping all aspects of the business running despite disasters.

Since IT systems have become critical to business success, disaster recovery is now a primary pillar within the business continuity process.

Most business owners do not usually consider that they may be victims of a natural disaster until an unforeseen crisis happens, which ends up costing their company a lot of money in operational and economic losses. These events can be unpredictable, and as a business owner, you cannot risk not having a disaster preparedness plan in place.

What Kind of Disasters Do Businesses Face?

Business disasters can either be technological, natural or human-made. Examples of natural disasters include floods, tornadoes, hurricanes, landslides, earthquakes and tsunamis. In contrast, human-made and technological disasters involve things like hazardous material spills, power or infrastructural failure, chemical and biological weapon threats, nuclear power plant blasts or meltdowns, cyberattacks, acts of terrorism, explosions and civil unrest.

Potential disasters to plan for include:

  • Application failure
  • VM failure
  • Host failure
  • Rack failure
  • Communication failure
  • Data center disaster
  • Building or campus disaster
  • Citywide, regional, national and multinational disasters

Why You Need DR

Regardless of size or industry, when unforeseen events take place, causing daily operations to come to a halt, your company needs to recover quickly to ensure that you continue providing your services to customers and clients.

Downtime is perhaps among the biggest IT expenses that a business faces. Based on 2014-2015 disaster recovery statistics from Infrascale, one hour of downtime can cost small businesses as much as $8,000, mid-size companies $74,000, and large organizations $700,000.

For small and mid-sized businesses (SMBs), extended loss of productivity can lead to the reduction of cash flow through lost orders, late invoicing, missed delivery dates and increased labor costs due to extra hours resulting from downtime recovery efforts.

If you do not anticipate major disruptions to your business and address them appropriately, you risk incurring long-term negative consequences and implications as a result of the occurrence of unexpected disasters.

Having a DR plan in place can save your company from multiple risks, including:

  • Reputation loss
  • Out of budget expenses
  • Data loss
  • Negative impact on your clients and customers

As businesses become more reliant on high availability, their tolerance for downtime has decreased. Therefore, many have a DR in place to prevent adverse disaster effects from affecting their daily operations.

The Essence of DR: Recovery Point and Recovery Time Objectives

The two critical measurements in DR and downtime are:

  • Recovery Point Objective (RPO): This refers to the maximum age of files that your organization must recover from its backup storage to ensure its normal operations resume after a disaster. It determines the minimum backup frequency. For instance, if your organization has a four-hour RPO, its system must back up every four hours.
  • Recovery Time Objective (RTO): This refers to the maximum amount of time your organization requires to recover its files from backup and resume normal operations after a disaster. Therefore, RTO is the maximum downtime amount that your organization can handle. If the RTO is two hours, then your operations can’t be down for a period longer than that.

Once you identify your RPO and RTO, your administrators can use the two measures to choose optimal disaster recovery strategies, procedures and technologies.
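As an illustration (not part of the original guide), both objectives reduce to simple time arithmetic. The dates below, and the four-hour RPO and two-hour RTO, are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical targets for illustration: a four-hour RPO and a two-hour RTO.
RPO = timedelta(hours=4)
RTO = timedelta(hours=2)

def rpo_met(last_backup: datetime, disaster_time: datetime) -> bool:
    """True if the newest backup is recent enough to satisfy the RPO."""
    return disaster_time - last_backup <= RPO

def rto_met(disaster_time: datetime, restored_time: datetime) -> bool:
    """True if operations resumed within the allowed downtime window."""
    return restored_time - disaster_time <= RTO

disaster = datetime(2022, 6, 1, 12, 0)
print(rpo_met(datetime(2022, 6, 1, 9, 0), disaster))   # backup is 3 hours old: True
print(rto_met(disaster, datetime(2022, 6, 1, 15, 0)))  # 3 hours to recover: False
```

In practice these checks would run against real backup catalogs and incident logs, but the arithmetic is the same: backup age against the RPO, downtime against the RTO.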

To recover operations during tighter RTO windows, your organization needs to position its secondary data optimally to make it easily and quickly accessible. One suitable method used to restore data quickly is recovery-in-place, because it moves all backup data files to a live state, which eliminates the need to move them across a network. It can protect against server and storage system failure.

Before using recovery-in-place, your organization needs to consider three things:

  • Its disk backup appliance performance
  • The time required to move all data from its backup state to a live one
  • Failback

Also, since recovery-in-place can sometimes take up to 15 minutes, replication may be necessary if you want a quicker recovery time. Replication refers to the periodic electronic refreshing or copying of a database from computer server A to server B, which ensures that all users in the network always share the same information level.
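To make the idea concrete, here is a minimal, hypothetical sketch of one replication pass; the version counters and in-memory “databases” are stand-ins for a real database engine:

```python
# Copy any record that is new, or newer, on the primary over to the replica,
# so both servers end up sharing the same information level. Records are
# stored as (version, value) pairs purely for illustration.

def replicate(primary: dict, replica: dict) -> int:
    """One replication pass from primary to replica; returns records copied."""
    copied = 0
    for key, (version, value) in primary.items():
        if key not in replica or replica[key][0] < version:
            replica[key] = (version, value)
            copied += 1
    return copied

primary = {"orders": (3, ["A-100", "A-101"]), "users": (1, ["ann"])}
replica = {"orders": (2, ["A-100"])}

print(replicate(primary, replica))  # 2 records copied
print(replica == primary)           # True: both servers now match
```

Running the pass again copies nothing, which is what makes periodic refreshes cheap between changes.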

Disaster Recovery Plan (DRP)

Try the Veritas Disaster Recovery Planning Guide

A disaster recovery plan refers to a structured, documented approach with instructions put in place to respond to unplanned incidents. It’s a step-by-step plan that consists of the precautions put in place to minimize a disaster’s effects so that your organization can quickly resume its mission-critical functions or continue to operate as usual.

Typically, DRP involves an in-depth analysis of all business processes and continuity needs. What’s more, before generating a detailed plan, your organization should perform a risk analysis (RA) and a business impact analysis (BIA). It should also establish its RTO and RPO.

1. Recovery Strategies

A recovery strategy should begin at the business level, which allows you to determine the most critical applications to run your organization. Recovery strategies define your organization’s plans for responding to incidents, while DRPs describe in detail how you should respond.

When determining a recovery strategy, you should consider issues such as:

  • Budget
  • Resources available such as people and physical facilities
  • Management’s position on risk
  • Technology
  • Data
  • Suppliers
  • Third-party vendors

Management must approve all recovery strategies, which should align with organizational objectives and goals. Once the recovery strategies are developed and approved, you can then translate them into DRPs.

2. Disaster Recovery Planning Steps

The DRP process involves a lot more than simply writing the document. A business impact analysis (BIA) and risk analysis (RA) help determine areas to focus resources in the DRP process.

The BIA is useful in identifying the impacts of disruptive events, which makes it the starting point for risk identification within the DR context. It also helps generate the RTO and RPO.

The risk analysis identifies vulnerabilities and threats that could disrupt the normal operations of processes and systems highlighted in the BIA. The RA also assesses the likelihood of the occurrence of a disruptive event and helps outline its potential severity.

A DR plan checklist has the following steps:

  • Establishing the activity scope
  • Gathering the relevant network infrastructure documents
  • Identifying severe threats and vulnerabilities as well as the organization’s critical assets
  • Reviewing the organization’s history of unplanned incidents and their handling
  • Identifying the current DR strategies
  • Identifying the emergency response team
  • Having the management review and approve the DRP
  • Testing the plan
  • Updating the plan
  • Implementing a DR plan audit

3. Creating a DRP

An organization can start its DRP with a summary of all the vital action steps required and a list of essential contacts, which ensures that crucial information is easily and quickly accessible.

The plan should also define the roles and responsibilities of team members while also outlining the criteria to launch the action plan. It must then specify, in detail, the response and recovery activities. The other essential elements of a DRP template include:

  • Statement of intent
  • The DR policy statement
  • Plan goals
  • Authentication tools such as passwords
  • Geographical risks and factors
  • Tips for dealing with the media
  • Legal and financial information
  • Plan history

4. DRP Scope and Objectives

A DRP can range in scope (i.e., from basic to comprehensive). Some can be upward of 100 pages.

DR budgets can vary significantly and fluctuate over time. Therefore, your organization can take advantage of any free resources available such as online DR plan templates from the Federal Emergency Management Agency. There is also a lot of free information and how-to articles online.

A DRP checklist of goals includes:

  • Identifying critical IT networks and systems
  • Prioritizing the RTO
  • Outlining the steps required to restart, reconfigure or recover systems and networks

The plan should, at the very least, minimize any adverse effects on daily business operations. Your employees should also know the necessary emergency steps to follow in the event of unforeseen incidents.

Distance, though important, is often overlooked during the DRP process. A DR site located close to the primary data center is ideal in terms of convenience, cost, testing and bandwidth. However, since outages differ in scope, a severe regional event may destroy both the primary data center and its DR site when the two are located close together.

5. Types of Disaster Recovery Plans

You can tailor a DRP for a given environment.

  • Virtualized DRP: Virtualization allows you to implement DR in an efficient and straightforward way. Using a virtualized environment, you can create new virtual machine (VM) instances immediately and provide high availability application recovery. What’s more, it makes testing easier to achieve. Your plan must include the ability to validate that applications can run in DR mode and return to normal operations within the RTO and RPO.
  • Network DRP: Coming up with a plan to recover a network gets complicated with the increase in network complexity. Ergo, it is essential to detail the recovery procedure step-by-step, test it correctly, and keep it updated. Under a network DRP, data is specific to the network; for instance, in its performance and networking staff.
  • Cloud DRP: A cloud-based DR can range from file backup to a complete replication process. Cloud DRP is time-, space- and cost-efficient; however, maintaining it requires skill and proper management. Your IT manager must know the location of both the physical and virtual servers. Also, the plan must address security issues related to the cloud.
  • Data Center DRP: This plan focuses on your data center facility and its infrastructure. One key element in this DRP is an operational risk assessment since it analyzes the key components required, such as building location, security, office space, power systems and protection. It must also address a broader range of possible scenarios.

Disaster Recovery Testing

Testing substantiates all DRPs. It identifies deficiencies in the plan and provides opportunities to fix any problems before a disaster occurs. Testing can also offer proof of the plan’s effectiveness and hits RPOs.

IT technologies and systems are continually changing. Therefore, testing ensures that your DRP is up to date.

Some reasons for not testing DRPs include budget restrictions, lack of management approval, or resource constraints. DR testing also takes time, planning and resources. It can also be an incident risk if it involves the use of live data. However, testing is an essential part of DR planning that you should never ignore.

DR testing ranges from simple to complex:

  • A plan review involves a detailed discussion of the DRP and looks for any missing elements and inconsistencies.
  • A tabletop test sees participants walk through the plan’s activities step by step. It demonstrates whether DR team members know their duties during an emergency.
  • A simulation test is a full-scale test that uses resources such as backup systems and recovery sites without an actual failover.
  • Running in disaster mode for a period is another method of testing your systems. For instance, you could failover to your recovery site and let your systems run from there for a week before failing back.

Your organization should schedule testing in its DR policy; however, be wary of its intrusiveness. This is because testing too frequently is counter-productive and draining on your personnel. On the other hand, testing less regularly is also risky. Additionally, always test your DR plan after making any significant system changes.

To get the most out of testing:

  • Secure management approval and funding
  • Provide detailed test information to all parties concerned
  • Ensure that the test team is available on the test date
  • Schedule your test correctly to ensure that it doesn’t conflict with other activities or tests
  • Confirm that test scripts are correct
  • Verify that your test environment is ready
  • Schedule a dry run first
  • Be prepared to stop the test if needed
  • Have a scribe take notes
  • Complete an after-action report detailing what worked and what failed
  • Use the results gathered to update your DR plan

Disaster Recovery-as-a-Service (DRaaS)

Disaster recovery-as-a-service is a cloud-based DR method that has gained popularity over the years. This is because DRaaS lowers costs, is easier to deploy, and allows regular testing.

Cloud testing solutions save your company money because they run on shared infrastructure. They are also quite flexible, allowing you to sign up for only the services you need, and you can complete your DR tests by only spinning up temporary instances.

DRaaS expectations and requirements are documented and contained in a service-level agreement (SLA). The third-party vendor then provides failover to their cloud computing environment, either on a pay-per-use basis or through a contract.

However, cloud-based DR may not be available after large-scale disasters since the DR site may not have enough room to run every user’s applications. Also, since cloud DR increases bandwidth needs, the addition of complex systems could degrade the entire network’s performance.

Perhaps the biggest disadvantage of the cloud DR is that you have little control over the process; thus, you must trust your service provider to implement the DRP in the event of an incident while meeting the defined recovery point and recovery time objectives.

Costs vary widely among vendors and can add up quickly if the vendor charges based on storage consumption or network bandwidth. Therefore, before selecting a provider, you need to conduct a thorough internal assessment to determine your DR needs.

Some questions to ask potential providers include:

  • How will your DRaaS work based on our existing infrastructure?
  • How will it integrate with our existing DR and backup platforms?
  • How do users access internal applications?
  • What happens if you cannot provide a DR service we need?
  • How long can we run in your data center after a disaster?
  • What are your failback procedures?
  • What is your testing process?
  • Do you support scalability?
  • How do you charge for your DR service?

Disaster Recovery Sites

A DR site allows you to recover and restore your technology infrastructure and operations when your primary data center is unavailable. These sites can be internal or external.

As an organization, you are responsible for setting up and maintaining an internal DR site. These sites are necessary for companies with aggressive RTOs and large information requirements. Some considerations to make when building your internal recovery site are hardware configuration, power maintenance, support equipment, layout design, heating and cooling, location and staff.

Though much more expensive compared to an external site, an internal DR site allows you to control all aspects of the DR process.

External sites are owned and operated by third-party vendors. They can either be:

  • Hot: It’s a fully functional data center complete with hardware and software, round-the-clock personnel, and customer data.
  • Warm: It’s an equipped data center with no customer data. Clients can install additional equipment or introduce customer data.
  • Cold: It has the infrastructure in place to support data and IT systems. However, it has no technology until client organizations activate DR plans and install equipment. Sometimes, it supplements warm and hot sites during long-term disasters.

Disaster Recovery Tiers

During the 1980s, two entities, the SHARE Technical Steering Committee and International Business Machines (IBM), came up with a tier system for describing DR service levels. The system showed off-site recoverability, with tier 0 representing the least amount and tier 6 the most.

A seventh tier was later added to include DR automation. Today, it represents the highest availability level in DR scenarios. Generally, as the ability to recover improves with each tier, so does the cost.

The Bottom Line

Preparation for a disaster is not easy. It requires a comprehensive approach that takes everything into account and encompasses software, hardware, networking equipment, connectivity, power, and testing that ensures disaster recovery is achievable within RPO and RTO targets. Although implementing a thorough and actionable DR plan is no easy task, its potential benefits are significant.

Everyone in your company must be aware of any disaster recovery plan put in place, and during implementation, effective communication is essential. It is imperative that you not only develop a DR plan but also test it, train your personnel, document everything correctly, and improve it regularly. Finally, be careful when hiring the services of any third-party vendor.

Need an enterprise-level disaster recovery plan for your organization? Veritas can help. Contact us now to receive a call from one of our representatives.

The Veritas portfolio provides all the tools you need for a resilient enterprise. From daily micro disasters to a “black swan” event, Veritas has you covered at scale. Learn more about Data Resiliency.

Campus IT Moves to SDN Technology for Speed & Visibility

Officials at Ohio’s Bowling Green State University had clear goals when they began re-engineering their technology environment three years ago. 

First, downsize an onsite data center that needed a tech refresh and took up too much valuable real estate in the heart of the campus. The solution for that goal was a colocation data center managed by an outside company. Second, BGSU wanted to power the new facility with next-generation networking to support fast and secure communications today and to facilitate the long-term goal of creating a cloud-centric university. 

The BGSU staff chose ­software-defined networking, enabled by the Cisco Application Centric Infrastructure (ACI), as the cornerstone of the network modernization effort. 

“SDN gives us an opportunity to capitalize on and implement today’s networking innovations in the best way possible,” says Jared Baber, BGSU’s network manager. “It also provides a foundation for new orchestration and automation capabilities that we can add in the future to reduce the time we need to deliver new services.”

BGSU isn’t the only higher education institution gravitating to SDN as IT managers scope out next-generation networks and the changing makeup of their data centers. 

“Forward-looking university CIOs recognize that more workloads are moving to the cloud,” says Bob Laliberte, senior analyst at the Enterprise Strategy Group. “SDN enables them to architect for that future at a fundamental level.” 

SDN is appealing because it addresses three areas where improvements are always welcome: delivering services, responding to new capacity demands and shoring up security. Many SDN adopters, including BGSU, are also looking to expand their software-defined strategy to something even more transformational: software-defined data center technology, which brings greater automation and efficiency not only to networks, but also to computing, storage and other core capabilities.

Alongside their benefits, SDN and SDDC represent fundamentally new ways to manage and secure IT resources. CIOs need a well-defined strategy to plan for these modernizations and achieve their full potential. 

BGSU and other early adopters are pointing the way. 

SIGN UP: Get more news from the EdTech newsletter in your inbox every two weeks!

Software-Defined Networks Deliver Four Key IT Benefits

SDN uses a central, software-based controller to manage the physical switches and routers running on a campus network. Administrators use the controller to configure hardware, push out software updates and apply security policies without having to manually touch each device, a tedious and time-consuming task in large, far-flung networks. SDN relieves strain on network administrators, but it also gives IT shops four other big benefits, according to Laliberte.

The first is IT agility. SDN lets IT spin up network connections on demand, as faculty, staff and researchers request additional servers and storage units. SDN also strengthens security. With a centralized SDN controller, admins can apply the latest policies uniformly across all connected devices, a task that is cumbersome and prone to gaps with manual processes. 


Together, these types of efficiencies help to cut costs. Centralized software can reduce operating expenses by enabling a relatively small number of staff members to configure and manage larger numbers of networking devices. 

Finally, SDN is a logical path to the cloud. Public cloud providers, including Google and Microsoft, rely heavily on SDN for flexible provisioning of serv­ices within their data centers. With similar technology, colleges can more easily move workloads around hybrid cloud environments. 

SDN isn’t limited to LANs, Laliberte says. CIOs are increasingly adopting a similar approach for WANs. “SD-WANs allow IT staffs to aggregate their network connections to save money on bandwidth costs and networking equipment in branches, such as satellite campuses,” he says.
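The control model behind these benefits can be sketched in a few lines. The device and policy shapes below are invented for illustration; they are not Cisco ACI’s actual API:

```python
# Toy model of SDN's central controller: one object holds the desired
# policy and pushes it to every registered device, instead of an
# administrator logging in to each switch by hand.

class Device:
    def __init__(self, name):
        self.name = name
        self.acl = []  # access control list currently applied to this device

class Controller:
    def __init__(self):
        self.devices = []

    def register(self, device):
        self.devices.append(device)

    def push_policy(self, acl):
        """Apply one ACL uniformly across every device; returns devices updated."""
        for device in self.devices:
            device.acl = list(acl)
        return len(self.devices)

ctrl = Controller()
for name in ("core-sw-1", "edge-sw-1", "edge-sw-2"):
    ctrl.register(Device(name))

policy = ["deny tcp any any eq 23", "permit ip any any"]
print(ctrl.push_policy(policy))  # 3: one change, every switch updated
```

A real controller such as Cisco ACI exposes this through a REST API rather than in-process objects, but the payoff is the same: one policy change, applied uniformly, with no per-device sessions.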

IT Gains Faster Deployment and Better Visibility

BGSU chose Cisco ACI after evaluating a variety of commercial platforms and open-source options, in part because ACI supported different hypervisors. In addition to purchasing ACI, the staff added a range of new networking hardware to the colocation data center, including Cisco’s Nexus Ethernet and MDS Fibre Channel switches. Rounding out the deployment were Cisco UCS server appliances, Citrix NetScaler network load balancers and HPE 3Par network storage units. 

Together, SDN and other networking upgrades allow IT staff to deliver services more efficiently. Previously, when administrators needed to spin up new virtual servers, they might have needed to change VLANs for the specific physical chassis they were using.

“There had to be a lot of coordination between networking and the systems team,” Baber says. “Now, systems administrators have the tools they need for creating endpoint groups and setting firewall rules without waiting for help from the networking team or the security teams, as in the past. This is significantly reducing the time it takes to deploy new services.”

Another plus is better network visibility, which means teams can address performance issues more quickly. For example, a network with a specific access control list (ACL) may support servers that are secured with a host-based firewall. 

“In the past, when there was a connectivity issue, the server team would be able to see the host-based firewall and the networking team would see the ACL, but neither would see the whole picture,” Baber says. “With SDN, it’s a single pane of glass providing an overall view of the environment for everyone. This means we can more effectively troubleshoot problems.”
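Baber’s “single pane of glass” amounts to joining two partial views of the same host. A toy sketch of that merge, with an invented host address and rule strings:

```python
# What the networking team sees (per-host ACL entries) ...
network_acls = {"10.0.1.5": ["deny tcp any host 10.0.1.5 eq 8080"]}
# ... and what the server team sees (host-based firewall rules).
host_firewalls = {"10.0.1.5": ["DROP tcp dpt:8080"]}

def combined_view(host):
    """Merge both layers so either team can see the whole picture at once."""
    return {
        "network_acl": network_acls.get(host, []),
        "host_firewall": host_firewalls.get(host, []),
    }

print(combined_view("10.0.1.5"))
```

With both layers in one record, a connectivity complaint can be traced to the blocking rule without a hand-off between teams.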

Closer collaboration is having beneficial ripple effects across the department, Baber says: “SDN has been a catalyst for transforming our IT culture in a positive way.” 


At BGSU, software-defined networking increases agility in responding to requests from faculty and staff. Photo: Angelo Merendino

Next-Gen Networks Yield Consistent Performance for Users

Like other networks in higher education, the one supporting New York’s Marist College must perform flawlessly day and night, even when there’s an unexpected traffic spike. This hit home during the 2016 presidential election campaign, when candidate Bernie Sanders visited the campus and 20,000 wireless devices suddenly logged on to a single network segment. Fortunately, the college had a solid SDN implementation in place to help IT maintain high performance rates in the core network.

“The SDN console showed us exactly where the traffic was coming from and how best to reroute it,” says William Thirsk, vice president and CIO. “We created a mesh network where there was the higher concentration of devices, which shuttled traffic away from our core business, and then took the temporary segment down when the event was over.”

Before launching SDN for production workloads, Marist’s research faculty and IT staff investigated the technology in the college’s Network Interoperability Lab. Here, students and vendors can test various platforms and flesh out open-source specifications for software-defined controllers and other tools. 

Encouraged by the SDN capabilities evidenced in the lab, IT staff rolled out OpenDaylight, an open-source SDN platform. Students, faculty and staff also created an SDN-based management module using Avalanche, another open-source technology, which lets networking teams optimize the college’s mix of networking gear. 

“The brands of the individual networking equipment don’t matter when we view the network from the SDN console,” Thirsk says. “We deal with traffic flow, bandwidth and routing considerations rather than wrestling with the equipment.” 

The console also helps staff defend against distributed denial of service attacks and other cyberthreats. When a danger arises, admins quickly take down the affected network segments to protect servers, software and data. Similarly, technicians can establish special authentication requirements for servers running sensitive information. 

“Only when someone is successfully authenticated will a VPN to those servers be launched,” says Thirsk. “Those network connections are specifically provisioned for those individual users, and when they log out, the span goes away.” 
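The pattern Thirsk describes, a connection provisioned for an individual user after successful authentication and removed at logout, can be sketched as follows (the class and names are illustrative, not Marist’s actual tooling):

```python
class SegmentManager:
    """Per-user network spans: provision at login, tear down at logout."""
    def __init__(self):
        self.active = {}  # user -> provisioned connection

    def login(self, user, authenticated):
        # Only a successful authentication launches the user's VPN span.
        if not authenticated:
            return None
        self.active[user] = f"vpn-{user}"
        return self.active[user]

    def logout(self, user):
        # "When they log out, the span goes away."
        self.active.pop(user, None)

mgr = SegmentManager()
print(mgr.login("alice", authenticated=False))  # None: no span without auth
print(mgr.login("alice", authenticated=True))   # vpn-alice
mgr.logout("alice")
print(mgr.active)                               # {}
```

The design choice worth noting is that the segment exists only as long as the session does, so there is no standing network path for an attacker to probe.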

For end users, the biggest impact of Marist’s next-gen networking is its consistent performance. “What they want is a very fast network connection wherever they go on campus,” Thirsk says. “If we aren’t hearing anything, that means everything is working fine.”

Alan Joch | 23 April 2018
How can we reduce the environmental impact of wound management?

The National Wound Care Strategy Programme’s recommendations aim to improve sustainability and reduce carbon emissions


This article explores the anticipated impact on sustainability of implementing the National Wound Care Strategy Programme’s recommendations for lower-limb wounds. Early learning suggests that it leads to faster healing, thereby reducing product use and travel associated with the delivery of care. The use of reusable compression systems for the large number of patients with leg ulceration caused by venous insufficiency may also reduce waste.

Citation: Morton N et al (2022) How can we reduce the environmental impact of wound management? Nursing Times [online]; 118: 8.

Authors: Nicky Morton is lead for supply and distribution; Una Adderley is programme director; Rachael Lee is clinical implementation manager; all at the National Wound Care Strategy Programme.

  • This article has been double-blind peer reviewed


The climate crisis is not new, but we have now reached the point where everyone needs to examine their impact on the environment, both at home and at work. We each need to determine what we can do to reduce our individual and collective carbon emissions. For nurses and other health professionals, this includes looking at the carbon footprint of:

  • The care we deliver;
  • Our transport choices;
  • Our product choices;
  • The waste we generate.

This article focuses on the work of the National Wound Care Strategy Programme (NWCSP) and how its recommendations may affect environmental sustainability.

The NHS’s carbon footprint

Founded by the United Nations, the Intergovernmental Panel on Climate Change (2022) published a report warning that the window of opportunity to save the planet is quickly closing and that the rise in temperature is now having a direct impact on all living things. It describes the evidence as “unequivocal”, stating that “climate change is a threat to human wellbeing and planetary health”. Climate disasters – such as uncontrollable forest fires, flooding and drought – are becoming more frequent and having a dramatic impact on people, wildlife and the environment.

Currently, the NHS produces 5.4% of all UK carbon emissions (Health Care Without Harm and Arup, 2019). It is estimated that 60% of the NHS’s carbon emissions relate to the use of equipment and consumables. Nurses, as the largest users of these items in the delivery of care, have an essential role to play in changing this (Gallagher and Dix, 2020). The impact of wound products on green issues has been raised as an issue of concern with the NWCSP by many nurses who want to know how they can contribute to reducing carbon emissions and improving sustainability.

The term ‘net zero’ refers to the ambition to ensure the amount of manmade greenhouse gases (such as carbon dioxide) going into the atmosphere is not greater than the amount being removed. Achieving net zero would halt global warming (Net Zero Climate, 2022).

The NHS aspires to be the world’s first net-zero health service; its Standard Contract now requires all NHS organisations to have both a sustainability lead and a green plan (NHS England and NHS Improvement, 2020). The NHSE and NHSI (2020) report set two targets:

  • To reduce the emissions the NHS controls directly (its carbon footprint) to net zero by 2040, following an 80% reduction between 2028 and 2032;
  • To reduce the emissions the NHS can influence (its carbon footprint plus) to net zero by 2045, following an 80% reduction between 2036 and 2039.

Alongside these sustainability aims, the NHS recovery plan (NHSE and NHSI, 2022) requires community service providers to free up capacity to help clear the elective care backlog caused by the Covid-19 pandemic. This should be achieved by avoiding admission to, and supporting early discharge from, acute care (NHSE and NHSI, 2022).

NHS wound care

Large numbers of people receive wound care from the NHS. Estimates made before the pandemic suggested that there were 3.8 million NHS patients with a wound and that this number was increasing (Guest et al, 2020). Wound care uses a large volume of wound-management products and, because it is mostly delivered in the community, incurs considerable travel (Guest et al, 2020).

The NHS’s net-zero aims require each organisation to tackle sustainability (NHSE and NHSI, 2022). When thinking about sustainability in relation to wound care, people often focus on whether a dressing is made of natural or synthetic fibres and how much packaging is used. Although these are important considerations, the issue is much wider. For example, if a wound is healed more quickly, fewer products are used and less travel is required between clinicians and the patient. Improving wound care would, therefore, not only contribute to the NHS’s net-zero objectives but also support its recovery plan (NHSE and NHSI, 2022).

Addressing sustainability through care pathways

The NWCSP has been commissioned by NHSE and NHSI to improve care for people in England who have lower-limb (leg or foot) wounds, pressure ulcers or surgical wounds. It aims to:

  • Reduce patient suffering;
  • Improve healing rates;
  • Prevent wounds occurring and recurring;
  • Use clinical time and other health and care resources in the most effective way.

The NWCSP’s work is structured around three areas:

  • People – providing education to develop the knowledge and skills of the health and care workforce, patients and carers;
  • Process – developing clinical pathways that support the delivery of evidence-based care;
  • Technology – adopting digital technology to improve data and information.

The NWCSP has identified that the main reasons for the large number of people living with wounds are poor healing rates and high levels of recurrence. This is due to unwarranted variation in care, and too little focus on preventing recurrence (NWCSP, 2020b). Too few patients receive care in line with best practice, such as:

  • Early accurate diagnosis of the cause of their leg or foot ulceration;
  • Evidence-based care, such as strong compression therapy and endovenous ablation for venous leg ulceration or early revascularisation for peripheral arterial disease.

Patients who do receive evidence-based care often receive it too late (NWCSP, 2020b). The most effective way of addressing sustainability in wound care is, therefore, less about using environmentally friendly dressings and more about improving care pathways. People with wounds need to be able to quickly access care from clinicians who can diagnose and treat them. This will improve healing rates and decrease recurrence rates, which will reduce:

  • The volume of wound-care products, plastic packaging and clinical waste (infected dressing waste) used;
  • The number of visits required, thereby reducing carbon emissions from travel.

Alongside these major contributions to sustainability, better healing rates will improve the quality of life for people with wounds and increase NHS workforce productivity, freeing up valuable time that can be used for care.

The NWCSP (2020a) has developed a clinical pathway for lower-limb ulcers, which is based on relevant National Institute for Health and Care Excellence (NICE) and Scottish Intercollegiate Guidelines Network clinical guidelines. The NWCSP (2020b) has also produced a business case, which outlines the predicted costs and benefits of implementing its clinical pathway. The business case predicts that implementing the recommendations would improve average healing rates for venous leg ulceration from 32% to 74% at 12 months, as well as halving recurrence rates. This would:

  • Release community nursing time spent on wound care by 23%;
  • Reduce rates of non-elective hospital admissions;
  • Reduce avoidable travel by both patients and staff;
  • Reduce the amount spent on wound-care products by 11-23% (NWCSP, 2020b).

The NWCSP is now confirming the assumptions of its business case in partnership with seven first-tranche implementation sites, one in each NHS region in England. Early results are very encouraging, suggesting the business case’s predictions are achievable and may even underestimate the potential benefits:

“Since implementing and embedding the NWCSP lower-limb recommendations through the use of clinical pathways, we have been able to demonstrate system-wide benefits. Using a wound-management digital system has enabled us to begin to measure our healing rates and infection rates and demonstrate real improvements.” (Lucy Woodhouse, tissue viability and lower limb lead clinical nurse specialist, Wye Valley NHS Trust)

Addressing sustainability through product choice

Leg and foot ulcers are the most common types of wounds, accounting for around 37% of all wounds and 71% of total NHS spend on wound care (Guest et al, 2020). Most leg ulceration is caused by venous insufficiency, which should be treated with compression therapy in the form of bandaging and hosiery to promote healing and reduce recurrence (Shi et al, 2021).

New technologies have been developed that provide different compression bandaging systems, compression hosiery kits and compression wraps. For the treatment of venous leg ulcers, there is robust evidence to suggest that two-layer compression hosiery kits and four-layer compression bandaging are equally effective in promoting healing. However, ulcers are less likely to recur following treatment with two-layer compression hosiery kits, which are more likely to be cost-effective (Ashby et al, 2014).

Two-layer compression hosiery kits offer many potential advantages for sustainability. Increased use of reusable compression hosiery kits could reduce the number of used bandages entering landfill via the offensive waste stream. Compression hosiery also offers greater opportunities for patients to engage in supported self-management (or with help from their carers). As a result, travel is reduced between clinicians and patients.

However, it is important to note that the environmental impact would not be eliminated by changing from disposable bandaging to reusable compression hosiery kits. There would be an environmental cost of washing the garments, due to the need for water (and fuel to heat it) and detergent. To reduce the carbon footprint of using reusable compression garments, it is important to encourage patients to:

  • Wash them at a low temperature;
  • Use non-biological detergent;
  • Air-dry them – this is also recommended to increase their lifespan.

Early results from the NWCSP’s first-tranche implementation sites suggest such product changes are being made:

“Our product data is showing an increase in reusable compression hosiery systems, enabling more patients to have access to supported self-management, with the support being provided by our lower limb specialist nurses. We are also seeing a reduction in our antimicrobial dressing use and overall wound-care product use through implementing the NWCSP evidence-based recommendations.” (Lucy Woodhouse, tissue viability and lower limb lead clinical nurse specialist, Wye Valley NHS Trust)

Two-layer hosiery kits are not suitable for patients with large, heavily exuding ulcers; the VenUS 6 trial identified many patients who could not tolerate hosiery due to the extent of their ulceration in terms of size, leakage and pain (Ashby et al, 2014). The volume of two-layer hosiery kits being used is still much lower than that of compression bandages (Dumville, 2020) but, in line with the NWCSP recommendations, earlier diagnosis and treatment should increase the number of patients who can successfully use two-layer compression hosiery kits as a treatment option.

A compression option that was more recently introduced is compression wraps (adjustable fastened or wrapped compression systems), which are being marketed as an alternative to compression bandaging and compression hosiery. Although these systems have been used for many years in lymphoedema management, it is not yet known whether they are an effective treatment for healing venous leg ulcers. The clinical- and cost-effectiveness of compression wraps is currently being evaluated in a trial that compares them with four- and two-layer bandaging (Dumville, 2020). If they are found to be an effective intervention, they will become a further treatment with sustainability benefits.

Moving forward

Nurses and other health and care professionals involved in wound management are in a pivotal position to contribute to an environmentally sustainable healthcare system. The key priority must be to reduce the number of people who have leg and foot ulcers; this is in line with NHS initiatives that aim to prevent pressure ulcers and diabetic foot ulcers (NICE, 2014; NICE, 2015). In addition, it is important to consider the life cycle of wound-care products and appliances. They should all be assessed in terms of their:

  • Materials (and methods of extraction);
  • Manufacturing and packaging;
  • Transportation and distribution;
  • Use;
  • Disposal.

In September 2021, the NHS approved a roadmap to support suppliers to align with its net-zero ambition. This will help reduce the carbon footprint of both suppliers and individual products by 2030 (NHS England, 2021). The NHS also aims to reduce the use of single-use plastics and to increase the reuse and recycling of medical devices in all areas by decarbonising the supply chain (NHSE and NHSI, 2020). Carbon footprint data is not yet available for most products, but people responsible for product choice should consider sustainability in addition to efficacy. This includes issues such as the country of manufacture, product content and packaging.

Current nursing students are more likely than previous student nurses to agree that the climate crisis and sustainability are important topics in nursing (Richardson et al, 2021). It is hoped that this generation of student clinicians will challenge current practice by identifying areas that are not aligned with the NHS’s net-zero aspirations, as well as spotting small changes that could make a big difference.

Since committing to achieving carbon net zero, the NHS has reduced its emissions; there is still a long way to go, but “together we can achieve even more” (NHS England, 2022). Addressing sustainability in wound care needs to focus on improving pathways of care to achieve faster healing and reduce recurrence, thereby reducing product use and travel. Implementing the NWCSP’s lower-limb recommendations, therefore, offers a real opportunity to contribute to the NHS’s sustainability agenda and achieving its net-zero goal.

Key points

  • The NHS aims to reduce its carbon footprint to net zero by 2040
  • Wound care uses a large volume of products and incurs considerable travel
  • The National Wound Care Strategy Programme’s recommendations aim to improve healing and prevent recurrence
  • Applying these recommendations to clinical pathways aims to reduce carbon emissions
  • Using reusable products and assessing their life cycle will also improve sustainability

Ashby R et al (2014) VenUS IV (venous leg ulcer study IV) – compression hosiery versus compression bandaging in the treatment of venous leg ulcers: a randomised controlled trial, mixed treatment comparison and decision analytic model. Health Technology Assessment; 18: 57, 1-293.

Dumville J (2020) VenUS 6: a Randomised Controlled Trial of Compression Therapies for the Treatment of Venous Leg Ulcers. The University of Manchester and Manchester University NHS Foundation Trust.

Gallagher R, Dix A (2020) Sustainability 1: can nurses reduce the environmental impact of healthcare? Nursing Times; 116: 9, 29-31.

Guest JF et al (2020) Cohort study evaluating the burden of wounds to the UK’s National Health Service in 2017/2018: update from 2012/2013. BMJ Open; 2020: 10, e045253.

Health Care Without Harm, Arup (2019) Health Care’s Climate Footprint. HCWH, Arup.

Intergovernmental Panel on Climate Change (2022) Climate Change 2022: Impacts, Adaptation and Vulnerability – Summary for Policymakers. IPCC.

National Institute for Health and Care Excellence (2015) Diabetic foot problems: prevention and management. 26 August (accessed 4 July 2022).

National Institute for Health and Care Excellence (2014) Pressure ulcers: prevention and management. 23 April (accessed 4 July 2022).

Net Zero Climate (2022) What is net zero? (accessed 4 July 2022).

NHS England (2022) Greener NHS. NHS England (accessed 13 June 2022).

NHS England (2021) Greener NHS. NHS England (accessed 4 July 2022).

NHS England and NHS Improvement (2022) Delivery Plan for Tackling the COVID-19 Backlog of Elective Care. NHSE and NHSI.

NHS England and NHS Improvement (2020) Delivering a ‘Net Zero’ National Health Service. NHSE and NHSI.

National Wound Care Strategy Programme (2020a) Recommendations for Lower Limb Ulcers. NWCSP.

National Wound Care Strategy Programme (2020b) Preventing and Improving Care of Chronic Lower Limb Wounds: Implementation Case. NWCSP.

Richardson J et al (2021) The Greta Thunberg effect: student nurses’ attitudes to the climate crisis. Nursing Times; 117: 5, 44-47.

Shi C et al (2021) Compression bandages or stockings versus no compression for treating venous leg ulcers. Cochrane Database of Systematic Reviews; 2021: 7.

NT Contributor | 1 August 2022
What Does the Supreme Court EPA Ruling Mean for New York Climate Efforts?

Gov. Hochul cuts a solar farm ribbon (photo: Philip Kamrass/ New York Power Authority)

Elected and appointed officials, candidates for office, and environmental activists in New York were reenergized to fight against climate change locally in the wake of a June Supreme Court ruling that limits the power of the federal Environmental Protection Agency.

The decision in West Virginia v. Environmental Protection Agency released June 30 impacts the EPA’s power to regulate carbon emissions, holding that the EPA needs congressional approval to take such action. Many legal experts expect that the Court’s argument in the ruling could be extended to other federal agencies, hindering their regulatory authority to act without specific permission from Congress.

For the first time, the Court explicitly invoked the “major questions doctrine,” a legal principle requiring Congress to provide clear authorization before a regulation is imposed. The decision held that the EPA exceeded its jurisdiction when it created a rule that shifted power plants to renewable energy under the Clean Air Act. In practice, this signals that the “major questions doctrine” could be applied by the conservative majority on the Supreme Court to other EPA regulations and other federal agencies, potentially undoing regulations of major significance where Congress has not explicitly legislated regulatory authority.

The EPA ruling, while not unexpected, sent additional shockwaves across the country after the conservative Supreme Court majority’s rulings overturning Roe v. Wade federal abortion rights and New York’s concealed carry gun regulations.

In some ways, the EPA ruling flew under the radar given how much attention has been on Roe especially. But many in New York took notice and highlighted efforts underway or in the offing to take additional measures to fight or adapt to climate change.

“By limiting the Environmental Protection Agency's ability to protect our environment, the Court is taking away some of the best tools we have to address the climate crisis,” Governor Kathy Hochul said in a statement. “New York is once again in the familiar, but unwelcome, position of stepping up after the Supreme Court strikes a blow to our basic protections. But as always, New York is ready. We will strengthen our nation-leading efforts to address the climate crisis, redouble efforts with sister states, build new clean energy projects in every corner of the state, and crack down on pollution harming the health of many New Yorkers. After this ruling, our work is more urgent than ever."

Implications for New York
State Department of Environmental Conservation Commissioner Basil Seggos told Gotham Gazette that while the Supreme Court’s decision is “devastating,” the “good news is that New York is ready.” 

The ruling reinforces the important role that the states have in addressing climate change, said Seggos, who oversees New York State’s environmental protection and regulatory agency.

New York’s Climate Leadership and Community Protection Act, signed into law in July 2019 by then-Governor Andrew Cuomo, is commonly touted as the most aggressive and comprehensive climate legislation in the nation. Among other requirements, the law mandates that New York reduce its greenhouse gas emissions by at least 85% by 2050 from 1990 levels and creates the Climate Action Council to create the full plan to make it happen.

A group of 22 members, the Council is in the process of developing the Scoping Plan to reach the goals set in the CLCPA.

“Ultimately, the Supreme Court decision doesn’t impact us from a regulatory perspective. We have our own state laws that are not subject to their decision-making,” said Seggos, who serves as Co-Chair of the CLCPA’s Climate Action Council. “I think what we now see is a heightening degree of responsibility that the states will have to ensure that the country is addressing the climate crisis.”

The decision also increases the importance of the state Department of Environmental Conservation working in coordination with the EPA, Seggos said.

“Our relationship with the EPA has never been stronger,” he said, referencing outreach, communication, and accessible EPA leadership. “This ruling underscores just how important that relationship is and that we are going to continue to be a committed partner with the federal government to find creative ways to help them achieve their goals and, of course, seek help from them for reaching our goals.”

The Supreme Court EPA ruling increases New York’s obligation to respond to climate change, said City Council Member James Gennaro, a Queens Democrat who chairs the Council’s Committee on Environmental Protection.

“This ruling will inevitably place more pressure on New York City and other local governments.

New York City in particular, being very climate conscious, will be left to fill the gaps,” said Gennaro. “But while New York City has been a global leader in climate efforts, there is very little motivating other states to follow suit. And that is the biggest problem we face from this ruling.”

With fewer federal standards, companies may look to move their operations out of New York and into states with weaker environmental regulations, according to Peter Iwanowicz, executive director of Environmental Advocates New York, a nonprofit advocacy group.

“If we have national standards that are level with New York, then everybody’s in the same playing field,” said Iwanowicz, who serves on the CLCPA’s Climate Action Council. “Leveling the playing field is a really important part of pollution reduction initiatives.” 

Still, Seggos assured that the EPA continues to have an “extraordinary amount of authority” with regard to its regulatory abilities.

“In spite of the Supreme Court ruling, the EPA still has powerful tools to reduce greenhouse gas emissions,” wrote Seggos and his Climate Action Council co-chair Doreen Harris, CEO of the New York State Energy Research & Development Authority (NYSERDA), in a recent lohud op-ed. “Chief among them is its authority to require every coal- and gas-fired power plant to utilize the best advanced technologies available to meet stringent emission limits or cease operation. The EPA should follow the lead of California, New York, and other states and adopt aggressive vehicle emission standards that drive vehicle manufacturers to produce and sell the zero-emission cars, trucks, and buses using well-established electric and fuel cell technology.”

Seggos told Gotham Gazette that the main issue he sees with the Supreme Court EPA ruling is that the “trend is worrying,” referring to setbacks on climate change, though the interview took place before news broke of a deal reached between U.S. Senate Majority Leader Chuck Schumer of New York and West Virginia Senator Joe Manchin on potentially massive climate action in a federal budget bill being further negotiated in Washington.

Still, Seggos’ concern remains, as he put it, that while the EPA still has authority and there is still a strong relationship between current federal and New York leadership, the Supreme Court ruling “sets back a critical initiative, which is the regulation of power plants from a national perspective and pushing ultimately the power sector into renewables.”

Others expressed concern that the generalities in the ruling will impact the EPA’s jurisdiction further and have more far-reaching implications.

“This is a major setback in our fight against climate change,” Gennaro said. “This ruling curtails the EPA’s authority and poses ‘major questions’ about the EPA’s authority to regulate emissions should this decision be applied more broadly.”

Eddie Bautista, Executive Director of the New York City Environmental Justice Alliance, said he fears the potential for the Supreme Court to further implement regulatory restrictions in the future, especially because many New York state programs have been enacted in order to show compliance with federal regulatory and legal mandates.

“The wording of the decision is so troubling,” Bautista said. “It’s not impossible to see a right-wing attempt to use that ruling to try to go after New York laws.”

Since the Supreme Court decision, Bautista said he has been careful in his approach to his advocacy, determined not to invite “similar attacks” in New York.

“We can’t adopt a more cautious approach, especially since the federal government’s hands have been tied,” he said. “But we also have to be mindful… If they’ve come after the federal government’s ability to protect human health and the environment, what will it look like when they come after us at the city and state level?”

Legislation in Place
“New York State has the ability to always act more aggressively than federal environmental standards,” Iwanowicz said. “Nothing that the Supreme Court decided under West Virginia v. EPA will impact New York State’s efforts to zero out our greenhouse gas emissions.”

While experts like Iwanowicz contend that New York’s ability to enforce the landmark Climate Leadership and Community Protection Act of 2019 (CLCPA) is not currently threatened in light of the ruling, some, like Bautista, are worried about future suits targeting the CLCPA, and many advocates and elected officials insist the state should be doing more.

“The CLCPA is absolutely not enough,” said Pete Sikora, senior advisor for nonprofit advocacy group New York Communities for Change, where he leads climate organizing. “It is too weak and has no enforcement mechanism. The very limited positive steps the state has taken are wholly insufficient. They must do the big, hard stuff. Fast. They need to break the pattern of making big promises to act, but not following through.”

Sikora is referring to the lack of a full plan from the Climate Action Council to fund, implement, and enforce the CLCPA, as well as related budgetary, legislative, and regulatory steps. There have been other climate- and environment-related bills that have become law in recent years but also a number of major proposals that have not made their way through Albany. The former group includes bills recently signed by Governor Hochul such as The Advanced Building Codes, Appliance and Equipment Efficiency Standards Act of 2022 and The Utility Thermal Energy Network and Jobs Act; while the latter group of unpassed climate bills includes the Build Public Renewables Act and a proposal to ban gas hook-ups in newly-constructed buildings. 

Following the Supreme Court decision, Hochul moved swiftly to sign that legislative package of three climate-related bills, targeting energy efficiency and lowering greenhouse gas emissions. The legislation supports the goals set in the CLCPA.

"Now more than ever, the importance of a cost-effective green transformation is clear, and strengthening building codes and appliance standards will reduce carbon emissions and save New Yorkers billions of dollars by increasing efficiency,” Hochul said in a statement. “This multi-pronged legislative package will not only replace dirty fossil fuel infrastructure, but it will also further cement New York as the national leader in climate action and green jobs."   

The United States Climate Alliance, created in response to former President Donald Trump’s withdrawal from the Paris Climate Agreement, is a coalition of 24 states including New York pledging to uphold the goals set forth in the original agreement. The member states, representing 59% of the U.S. economy and 54% of the U.S. population, according to the Alliance, pledge to achieve overall net-zero greenhouse gas emissions no later than 2050. 

“New York has an opportunity to demonstrate what it means to be a true global leader on addressing the climate emergency,” Iwanowicz said of implementing the CLCPA. “Knitting together the coalition of the willing states is going to be really important, but it’s always been important.”

“We really need to double down on our draft plan,” he said of the Climate Action Council’s scoping plan.

In alignment with the agreement to limit global temperature rise to 1.5 degrees Celsius – a goal originally established in the Paris Agreement – New York City has pledged to reduce its own greenhouse gas emissions by 80% by 2050.

New York City Comptroller Brad Lander recently unveiled a Climate Dashboard to document the city’s progress in reducing emissions, as well as other climate goals.

“This dashboard tangibly assesses the danger our inaction poses towards our city and keeps us accountable to moving the needle on both mitigation and adaptation,” Louise Yeung, Chief Climate Officer for the New York City Comptroller, said in a statement. “Meeting our emissions reduction and resiliency goals is not an option for New York City – the future of our neighborhoods depends on the collective actions we take now.”

New York City’s Local Law 97, passed in 2019, is a nation-leading law that targets greenhouse gas emissions from existing buildings. The law, which has some major implementation questions of its own, calls for reducing building-based emissions by 40% by 2030, and by 80% by 2050.

Advocates and some elected officials have called on Mayor Eric Adams to ensure the full implementation of the law via appropriate funding and the regulations that his agencies are currently crafting. Implementation is an area where some see a greater role for the City Council, but the mayoral administration will nonetheless have significant say in whether the law has the kind of teeth its supporters thought it would.

“We’re forced to play defense to try and make sure he fully implements and enforces the law,” Sikora said of the new mayor. “He needs to move forward, not threaten forward progress.”

Gennaro called Local Law 97 one of the “most ambitious” laws in the country, and said that together with the “Gas Ban Bill,” which prohibits the use of fossil fuel infrastructure in most new buildings in New York City starting in 2023, it sets “a historic precedent for the country.”

Proposed Legislation and Advocacy Efforts
Some New York environmentalists are demanding stronger action by the state, calling on elected officials to ensure that New York reaches the goals set in the CLCPA, in part by passing the Build Public Renewables Act and the All-Electric Buildings Act.

“These are vital actions that should have passed in the last state legislative session,” Sikora said. “Governor Hochul needs to get it done and Speaker Heastie should stop doing the bidding of the oil and gas lobby by blocking these vital bills.”

The All-Electric Buildings Act is in essence the state version of New York City’s gas ban bill. It would amend the state energy construction code, prohibiting “infrastructure, building systems, or equipment used for the combustion of fossil fuels in new construction statewide no later than December 31, 2023 if the building is less than seven stories and July 1, 2027 if the building is seven stories or more.”

The Build Public Renewables Act, which passed the State Senate but not Assembly this past session, would require the New York Power Authority to phase out its fossil fuel power plants by 2030 and provide renewable energy to customers, enhancing the state authority’s mandate and ability to produce renewable energies instead of largely being a procuring entity to buy such power from private providers.

“The Build Public Renewables Act is instrumental to the Democratic Socialists of America’s vision of a national Green New Deal,” reads a statement from the New York City chapter of the Democratic Socialists of America, which says it wrote the BPRA. “If passed, it would provide a model for other states to build renewable energy at the speed and scale that the country - and human civilization - needs to prevent catastrophic climate crises.”

Assemblymember Robert Carroll, a Brooklyn Democrat and prime sponsor of the BPRA, continues to call for its passage, including at a hearing this past week on the bill. 

“Without the NYBPRA, New York is unlikely to meet the goals set forth by the Climate Leadership and Community Protection Act of 2019,” Carroll said in a press release earlier this year. “Governor Hochul has a unique opportunity to use NYPA's strong bond rating to create desperately needed renewable energy projects. Projects that will create good jobs and be revenue neutral. Relying solely on the private market is not only insufficient for our needs, it would also result in an environmental justice catastrophe, where renewable infrastructure will be built for affluent communities downstate, while poor and upstate communities will continue to be forced to rely on dirty energy from fossil fuels.”

Despite the alarming Supreme Court ruling, Seggos said he has confidence in the determination of New Yorkers to respond to climate change.

“New Yorkers are never hopeless,” Seggos said. “We get upset, but I think we have a can-do spirit here in New York, that when you see setbacks like this, it just renews our sense of ability to create fixes for big problems.”

Tue, 02 Aug 2022 15:01:00 -0500
Fortinet’s 2+1 formula helps organisations to comply with PDPA faster

published : 20 Jul 2022 at 09:38

Current customers using FortiGate and FortiSIEM can activate 2FA instantly

The Personal Data Protection Act (PDPA), which came into full effect on June 1st, stipulates that Personal Data Security refers to the confidentiality, integrity, and availability of personal data. Organisations that handle personal data must take steps to prevent any loss, unauthorised or unlawful access, use, change, amendment or disclosure of the personal data. There must be safeguards to cover the administrative, technical, and physical risks, so organisations in Thailand are now investing in people, process and technology – the three resources necessary to create the desired outcomes of the PDPA journey.

The challenge has become tougher since the pandemic, with the Work-From-Anywhere model now prevalent and remote connections to sensitive data and applications in the organisation now critical. With the PDPA in force, modern organisations have to put more priority on effective tools that allow secure, robust network access control and multi-factor authorisation policy management. This is a delicate, complex, and time-consuming implementation process. Technology and tools are the largest components of the data protection budget for many companies, and business leaders must plan ahead to invest in data privacy controls, especially given that the PDPA effective date has passed.

Fortinet, the world leader in cybersecurity, today introduces a quick way to help small and medium-sized enterprises get a head start in complying with the Act faster and more easily. The “2 + 1” formula consists of two key solutions, FortiGate and FortiSIEM, together with one additional feature: two-factor authentication (2FA), which requires token software installed on a smartphone. SMEs, and even large enterprises, can easily adopt 2FA with Fortinet’s zero trust access solution, which can thoroughly address the data security protection issues that most organisations are concerned about.

The advanced next-generation firewall FortiGate and FortiSIEM offer a comprehensive range of functionalities that comply with the security measures of the PDPA, as follows:

1. Able to gain strong control of access to personal data as well as personal data storage and processing appliances;

2. Able to establish policies relating to authorisation or assignment of access rights to personal data conveniently;

3. Able to view and operate the user access management efficiently, in order to allow only authorised persons access to personal data;

4. Able to identify the user responsibilities to prevent unauthorised access to information or prevent actions that may lead to data disclosure and duplication and data storage and processing device theft;

5. Able to store and examine all logs and histories of access, change, deletion and transfer of data.
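Requirement 5 — retaining and reviewing a history of every access, change, deletion and transfer of personal data — can be illustrated with a minimal, generic sketch. The field names, function name, and file path below are illustrative assumptions for this article, not a Fortinet or PDPA-mandated schema:

```python
import json
import time


def audit_event(user, action, resource, success, path="audit.log"):
    """Append one audit record as a JSON line, so every access, change,
    deletion or transfer of personal data leaves a reviewable trace."""
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,          # who performed the action
        "action": action,      # e.g. "read", "update", "delete", "export"
        "resource": resource,  # which personal-data store was touched
        "success": success,    # whether the attempt was authorised
    }
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record
```

A SIEM appliance does far more than this (central collection, correlation, alerting, retention policy), but the record shape above is the kind of per-event history the requirement describes.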

Fortinet’s solution has the unique advantage that a 2FA feature can be enabled with Mobile Token instantly on the FortiGate firewall appliance. Organisations using the FortiGate firewall just need to purchase a licence for the additional 2FA feature. This is a simple and effective method whereby enterprises can provide multi-level authentication with their corporate accounts confidently. This makes it more seamless, convenient and cost-effective, allowing enterprises to rapidly accelerate their PDPA compliance journey.
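Fortinet’s Mobile Token is proprietary, but app-based 2FA of this kind is typically built on time-based one-time passwords (TOTP). As a rough sketch of the underlying mechanism only — not Fortinet’s implementation — here is an RFC 6238-style TOTP generator using just the Python standard library (the function name and parameters are ours, for illustration):

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32 is the shared secret in base32, as encoded in most
    authenticator-app enrollment QR codes.
    """
    if for_time is None:
        for_time = int(time.time())
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `step`-second intervals since the epoch.
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226, section 5.3).
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The server and the phone app compute the same code independently from the shared secret; a login succeeds only if the code the user types matches the server’s value within the allowed time window.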

“Many Fortinet devices already have a number of personal data protection capabilities,” explains Dr. Rattipong Putthacharoen, Senior Manager Systems Engineering Department of Fortinet. “This means customers who are currently using FortiGate firewall can turn on the embedded data protection functions to comply with PDPA requirements. Fortinet launches the new “2 + 1” formula to educate customers to focus on necessary technical solutions that help them meet the general PDPA requirements. It can be a faster starting point in constructing PDPA-supported processes. After that, organisations can opt to gradually expand their security capabilities according to their business needs, with step-by-step security risks reduction without increasing the burden on the IT team.”

Tue, 19 Jul 2022 21:38:00 -0500
Live news updates: SEC probes Robinhood over compliance with short selling rules

Tesla rival Lucid Motors halved its 2022 production target on Wednesday, citing “extraordinary supply chain” challenges as it tries to ramp up production and meet “strong demand.”

The California-based group, backed by Saudi Arabia’s sovereign wealth fund, said 2022 production is now estimated between 6,000 and 7,000 cars, down from an earlier projection of 12,000 to 14,000 — which itself was a cut from a start-of-year forecast of 20,000 vehicles.

Shares of the electric carmaker were already down 50 per cent this year, reflecting numerous challenges in scaling up production of its Lucid Air — a luxury electric vehicle that starts at $89,000 and was named MotorTrend’s Car of the Year for 2022. Shares fell an additional 12 per cent after-hours on Wednesday.

“We’ve identified the primary bottlenecks, and we are taking appropriate measures — bringing our logistics operations in-house, adding key hires to the executive team, and restructuring our logistics and manufacturing organisation,” said chief executive Peter Rawlinson.

“We continue to see strong demand for our vehicles, with over 37,000 customer reservations, and I remain confident that we shall overcome these near-term challenges.”

The 37,000 reservations total $3.5bn in potential sales, the company said, but Lucid reported just $97.3mn of revenues in the June quarter — well below estimates at $147mn as it delivered only 679 cars in the three-month period.

Lucid CFO Sherry House said the company has $4.6bn in cash, “which we believe is sufficient to fund the Company well into 2023.”

Despite recent stock market woes, Lucid had a market valuation of $34bn at the time of its earnings on Tuesday, compared to $54bn at GM and $60.4bn at Ford, companies that routinely sell millions of vehicles per year.

Wed, 03 Aug 2022 10:29:00 -0500