Full list of 250-511 real exam questions, updated today

Killexams.com offers you a 100% free 250-511 test prep download to try before you register for the full copy. Try our 250-511 exam simulator, which lets you experience the real 250-511 Free Exam PDF. Passing the actual 250-511 exam will become much simpler for you. Killexams.com also gives you three months of free updates to the 250-511 Administration of Symantec(TM) Data Loss Prevention 11 exam questions.

Exam Code: 250-511 Practice test 2022 by Killexams.com team
Administration of Symantec(TM) Data Loss Prevention 11
Symantec Administration test Questions
Interview: Frank Cohen on FastSOA

InfoQ today publishes a one-chapter excerpt from Frank Cohen's book  "FastSOA". On this occasion, InfoQ had a chance to talk to Frank Cohen, creator of the FastSOA methodology, about the issues when trying to process XML messages, scalability, using XQuery in the middle tier, and document-object-relational-mapping.

InfoQ: Can you briefly explain the ideas behind "FastSOA"?

Frank Cohen: For the past 5-6 years I have been investigating the impact an average Java developer's choice of technology, protocols, and patterns for building services has on the scalability and performance of the resulting application. For example, Java developers today have a choice of 21 different XML parsers! Each one has its own scalability, performance, and developer productivity profile. So a developer's choice on technology makes a big impact at runtime.

I looked at distributed systems that used message oriented middleware to make remote procedure calls. Then I looked at SOAP-based Web Services. And most recently at REST and AJAX. These experiences led me to look at SOA scalability and performance built using application server, enterprise service bus (ESB), business process execution (BPEL), and business integration (BI) tools. Across all of these technologies I found a consistent theme: At the intersection of XML and SOA are significant scalability and performance problems.

FastSOA is a test methodology and set of architectural patterns to find and solve scalability and performance problems. The patterns teach Java developers that there are native XML technologies, such as XQuery and native XML persistence engines, that should be considered in addition to Java-only solutions.

InfoQ: What's "Fast" about it? ;-)

FC: First off, let me describe the extent of the problem. Java developers building Web enabled software today have a lot of choices. We've all heard about Service Oriented Architecture (SOA), Web Services, REST, and AJAX techniques. While there are a LOT of different and competing definitions for these, most Java developers I speak to expect that they will be working with objects that message to other objects - locally or on some remote server - using encoded data, and often the encoded data is in XML format.

The nature of these interconnected services we're building means our software needs to handle messages that can be small to large and simple to complex. Consider the performance penalty of using a SOAP interface and a streaming XML parser (StAX) to handle a simple message schema where the message size grows. A modern and expensive multi-processor server that easily serves 40 to 80 Web pages per second serves as little as 1.5 to 2 XML requests per second.
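To make that cost concrete, here is a minimal sketch (not Cohen's benchmark code; the class and method names are invented) of the kind of work a StAX-based stack performs on every request: it walks each element of the SOAP payload, and a binding layer typically creates one or more Java objects per element, so the work grows with message size and complexity.

    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;
    import java.io.ByteArrayInputStream;

    public class SoapElementCounter {
        // Walks every element of a SOAP payload; the work grows with message size.
        public static int countElements(byte[] soapMessage) throws Exception {
            XMLInputFactory factory = XMLInputFactory.newInstance();
            XMLStreamReader reader =
                    factory.createXMLStreamReader(new ByteArrayInputStream(soapMessage));
            int elements = 0;
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    elements++;  // a binding layer typically builds one or more objects here
                }
            }
            reader.close();
            return elements;
        }
    }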

(Figure: Scalability Index)

Without some sort of remediation Java software often slows to a crawl when handling XML data because of a mismatch between the XML schema and the XML parser. For instance, we checked one SOAP stack that instantiated 14,385 Java objects to handle a request message of 7000 bytes that contains 200 XML elements.

Of course, titling my work SlowSOA didn't sound as good. FastSOA offers a way to solve many of the scalability and performance problems. FastSOA uses native XML technology to provide service acceleration, transformation, and federation services in the mid-tier. For instance, an XQuery engine provides a SOAP interface for a service, handling decoding of the request, transforming the request data into something more useful, and routing the request to a Java object or another service.
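As a rough illustration of that mid-tier idea, the sketch below runs an XQuery against an incoming SOAP envelope from Java through the standard XQJ API (JSR 225) to pull out the requested operation before routing. The query, the variable name, and the Saxon data-source class are assumptions for the example; FastSOA itself used a commercial XQuery engine, and the concrete data-source class differs by vendor.

    import javax.xml.namespace.QName;
    import javax.xml.xquery.XQConnection;
    import javax.xml.xquery.XQDataSource;
    import javax.xml.xquery.XQPreparedExpression;
    import javax.xml.xquery.XQResultSequence;

    public class MidTierXQueryRouter {
        public static String extractOperation(String soapEnvelope) throws Exception {
            // Any XQJ-capable engine works here; the concrete class is vendor specific (assumed).
            XQDataSource ds = new net.sf.saxon.xqj.SaxonXQDataSource();
            XQConnection conn = ds.getConnection();

            // Return the local name of the first child of the SOAP Body,
            // which is usually the operation being requested.
            XQPreparedExpression expr = conn.prepareExpression(
                    "declare namespace soap = 'http://schemas.xmlsoap.org/soap/envelope/'; " +
                    "declare variable $msg external; " +
                    "local-name($msg//soap:Body/*[1])");
            expr.bindDocument(new QName("msg"), soapEnvelope, null, null);

            XQResultSequence result = expr.executeQuery();
            result.next();
            String operation = result.getItemAsString(null);
            conn.close();
            return operation;  // route to a Java object or another service based on this
        }
    }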

InfoQ: One alternative to XML databinding in Java is the use of XML technologies, such as XPath or XQuery. Why muddy the water with XQuery? Why not just use Java technology?

FC: We're all after the same basic goals:

  1. Good scalability and performance in SOA and XML environments.
  2. Rapid development of software code.
  3. Flexible and easy maintenance of software code as the environment and needs change.

In SOA, Web Service, and XML domains I find the usual Java choices don't get me to all three goals.

Chris Richardson explains the Domain Model Pattern in his book POJOs in Action. The Domain Model is a popular pattern to build Web applications and is being used by many developers to build SOA composite applications and data services.

(Figure: Platform)

The Domain Model divides into three portions: A presentation tier, an application tier, and a data tier. The presentation tier uses a Web browser with AJAX and RSS capabilities to create a rich user interface. The browser makes a combination of HTML and XML requests to the application tier. Also at the presentation tier is a SOAP-based Web Service interface to allow a customer system to access functions directly, such as a parts ordering function for a manufacturer's service.

At the application tier, an Enterprise Java Bean (EJB) or plain-old Java object (Pojo) implements the business logic to respond to the request. The EJB uses a model, view, controller (MVC) framework - for instance, Spring MVC, Struts or Tapestry - to respond to the request by generating a response Web page. The MVC framework uses an object/relational (O/R) mapping framework - for instance Hibernate or Spring - to store and retrieve data in a relational database.

I see problem areas that cause scalability and performance problems when using the Domain Model in XML environments:

  • XML-Java Mapping requires increasingly more processor time as XML message size and complexity grows.
  • Each request operates the entire service. For instance, many times the user will check order status sooner than any status change is realistic. If the system kept track of the most recent response's time-to-live duration, then it would not have to operate the entire service and could instead return the previously cached response.
  • The vendor application requires the request message to be in XML form. The data the EJB previously processed from XML into Java objects now needs to be transformed back into XML elements as part of the request message. Many Java-to-XML frameworks - for instance, JAXB, XMLBeans, and Xerces - require processor-intensive transformations. Also, I find these frameworks challenging me to write difficult and needlessly complex code to perform the transformation (a minimal JAXB round trip is sketched after this list).
  • The service persists order information in a relational database using an object-relational mapping framework. The framework transforms Java objects into relational rowsets and performs joins among multiple tables. As object complexity and size grow, my research shows many developers need to debug the O/R mapping to improve speed and performance.
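Here is that minimal, hypothetical JAXB round trip mentioned above: the inbound XML is unmarshalled into Java objects and then marshalled back into XML for the outbound vendor request. The Order class and its fields are invented for the illustration; real SOAP payloads are considerably larger and more complex.

    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Marshaller;
    import javax.xml.bind.Unmarshaller;
    import javax.xml.bind.annotation.XmlRootElement;
    import java.io.StringReader;
    import java.io.StringWriter;

    public class OrderMappingExample {

        @XmlRootElement(name = "order")
        public static class Order {   // hypothetical message type
            public String id;
            public int quantity;
        }

        public static void main(String[] args) throws Exception {
            String requestXml = "<order><id>42</id><quantity>3</quantity></order>";
            JAXBContext ctx = JAXBContext.newInstance(Order.class);

            // XML -> Java objects, once per inbound request
            Unmarshaller in = ctx.createUnmarshaller();
            Order order = (Order) in.unmarshal(new StringReader(requestXml));

            // Java objects -> XML again for the outbound vendor request
            Marshaller out = ctx.createMarshaller();
            StringWriter outboundXml = new StringWriter();
            out.marshal(order, outboundXml);
            System.out.println(outboundXml);
        }
    }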

In no way am I advocating a move away from your existing Java tools and systems. There is a lot we can do to resolve these problems without throwing anything out. For instance, we could introduce a mid-tier service cache using XQuery and a native XML database to mitigate and accelerate many of the XML domain specific requests.

Architecture

The advantage to using the FastSOA architecture as a mid-tier service cache is in its ability to store any general type of data, and its strength in quickly matching services with sets of complex parameters to efficiently determine when a service request can be serviced from the cache. The FastSOA mid-tier service cache architecture accomplishes this by maintaining two databases:

  • Service Database. Holds the cached message payloads. For instance, the service database holds a SOAP message in XML form, an HTML Web page, text from a short message, and binary from a JPEG or GIF image.
  • Policy Database. Holds units of business logic that look into the service database contents and make decisions on servicing requests with data from the service database or passing through the request to the application tier. For instance, a policy that receives a SOAP request checks security information in the SOAP header to validate that a user may receive previously cached response data. In another instance a policy checks the time-to-live value from a stock market price quote to see if it can respond to a request from the stock value stored in the service database.
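The time-to-live style of policy described above can be sketched in a few lines of Java. This is only an illustration of the decision logic, not FastSOA's implementation, which expresses policies in XQuery over a native XML store; all class and method names here are invented.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    public class MidTierServiceCache {

        private static class Entry {
            final String payload;        // cached response: SOAP, HTML, text, ...
            final long expiresAtMillis;  // recorded time-to-live for this entry
            Entry(String payload, long ttlMillis) {
                this.payload = payload;
                this.expiresAtMillis = System.currentTimeMillis() + ttlMillis;
            }
        }

        private final Map<String, Entry> serviceDb = new ConcurrentHashMap<>();

        // Policy: answer from the cache while the entry is fresh, otherwise pass the
        // request through to the application tier and cache the fresh response.
        public String handle(String requestKey, Function<String, String> applicationTier,
                             long ttlMillis) {
            Entry cached = serviceDb.get(requestKey);
            if (cached != null && System.currentTimeMillis() < cached.expiresAtMillis) {
                return cached.payload;                        // served from the mid-tier cache
            }
            String fresh = applicationTier.apply(requestKey); // pass-through to the service
            serviceDb.put(requestKey, new Entry(fresh, ttlMillis));
            return fresh;
        }
    }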

FastSOA uses the XQuery data model to implement policies. The XQuery data model supports any general type of document and any general dynamic parameter used to fetch and construct the document. Used to implement policies, the XQuery engine allows FastSOA to efficiently assess common criteria of the data in the service cache, and the flexibility of XQuery allows for user-driven fuzzy pattern matches to efficiently represent the cache.

FastSOA uses native XML database technology for the service and policy databases for performance and scalability reasons. Relational database technology delivers satisfactory performance to persist policy and service data in a mid-tier cache provided the XML message schemas being stored are consistent and the message sizes are small.

InfoQ: What kinds of performance advantages does this deliver?

FC: I implemented a scalability test to contrast native XML technology with Java technology for implementing a service that receives SOAP requests.

(Figure: TPS for Service Interface)

The test varies the size of the request message among three levels: 68 K, 202 K, and 403 K bytes. The test measures the roundtrip time to respond to the request at the consumer. The test results are from a server with dual-CPU Intel Xeon 3.0 GHz processors running on a gigabit switched Ethernet network. I implemented the code in two ways:

  • FastSOA technique. Uses native XML technology to provide a SOAP service interface. I used a commercial XQuery engine to expose a socket interface that receives the SOAP message, parses its content, and assembles a response SOAP message.
  • Java technique. Uses the SOAP binding proxy interface generator from a popular commercial Java application server. A simple Java object receives the SOAP request from the binding, parses its content using JAXB created bindings, and assembles a response SOAP message using the binding.

The results show a 2 to 2.5 times performance improvement when using the FastSOA technique to expose service interfaces. The FastSOA method is faster because it avoids many of the mappings and transformations that are performed in the Java binding approach to work with XML data. The greater the complexity and size of the XML data the greater will be the performance improvement.

InfoQ: Won't these problems get easier with newer Java tools?

FC: I remember hearing Tim Bray, co-inventor of XML, exhorting a large group of software developers in 2005 to go out and write whatever XML formats they needed for their applications. Look at all of the different REST and AJAX related schemas that exist today. They are all different and many of them are moving targets over time. Consequently, when working with Java and XML the average application or service needs to contend with three facts of life:

  1. There's no gatekeeper to the XML schemas. So a message in any schema can arrive at your object at any time.
  2. The messages may be of any size. For instance, some messages will be very short (less than 200 bytes) while some messages may be giant (greater than 10 Mbytes.)
  3. The messages use simple to complex schemas. For instance, the message schema may have very few levels of hierarchy (less than 5 children for each element) while other messages will have multiple levels of hierarchy (greater than 30 children.)

What's needed is an easy way to consume any size and complexity of XML data and to easily maintain it over time as the XML changes. This kind of changing landscape is what XQuery was created to address.

InfoQ: Is FastSOA only about improving service interface performance?

FC: FastSOA addresses these problems:

  • Solves SOAP binding performance problems by reducing the need for Java objects and increasing the use of native XML environments to provide SOAP bindings.
  • Introduces a mid-tier service cache to provide SOA service acceleration, transformation, and federation.
  • Uses native XML persistence to solve XML, object, and relational incompatibility.

(Figure: FastSOA Pattern)

FastSOA is an architecture that provides a mid-tier service binding, XQuery processor, and native XML database. The binding is a native and streams-based XML data processor. The XQuery processor is the real mid-tier that parses incoming documents, determines the transaction, communicates with the "local" service to obtain the stored data, serializes the data to XML, and stores the data into a cache while recording a time-to-live duration. While this is an XML-oriented design, XQuery and native XML databases handle non-XML data, including images, binary files, and attachments. An equally important benefit to the XQuery processor is the ability to define policies that operate on the data at runtime in the mid-tier.

Transformation

FastSOA provides mid-tier transformation between a consumer that requires one schema and a service that only provides responses using a different and incompatible schema. The XQuery in the FastSOA tier transforms the requests and responses between incompatible schema types.

Federation

Lastly, when a service commonly needs to aggregate the responses from multiple services into one response, FastSOA provides service federation. For instance, many content publishers such as the New York Times provide new articles using Really Simple Syndication (RSS). FastSOA may federate news analysis articles published on a Web site with late-breaking news stories from several RSS feeds. This can be done in your application but is better done in FastSOA because the content (news stories and RSS feeds) usually includes time-to-live values that are ideal for FastSOA's mid-tier caching.

InfoQ: Can you elaborate on the problems you see in combining XML with objects and relational databases?

FC: While I recommend using a native XML database for XML persistence, it is possible to be successful using a relational database. Careful attention to the quality and nature of your application's XML is needed. For instance, XML is already widely used to express documents, document formats, interoperability standards, and service orchestrations. There are even arguments put forward in the software development community to represent service governance in XML form and operate on it with XQuery methods. In a world full of XML, we software developers have to ask if it makes sense to use relational persistence engines for XML data. Consider these common questions:

  • How difficult is it to get XML data into a relational database?
  • How difficult is it to get relational data to a service or object that needs XML data? Can my database retrieve the XML data with lossless fidelity to the original XML data? Will my database deliver acceptable performance and scalability for operations on XML data stored in the database? Which database operations (queries, changes, complex joins) are most costly in terms of performance and required resources (CPUs, network, memory, storage)?

Your answers to these questions form the criteria by which it will make sense to use a relational database, or perhaps not. The alternatives to relational engines are native XML persistence engines such as eXist, Mark Logic, IBM DB2 V9, TigerLogic, and others.

InfoQ: What are the core ideas behind the PushToTest methodology, and what is its relation to SOA?

FC: It frequently surprises me how few enterprises, institutions, and organizations have a method to test services for scalability and performance. One Fortune 50 company asked a summer intern they wound up hiring to run a few performance tests, when he had time between other assignments, to check for and identify scalability problems in their SOA application. That was their entire approach to scalability and performance testing.

The business value of running scalability and performance tests comes once a business formalizes a test method that includes the following:

  1. Choose the right set of test cases. For instance, the test of a multiple-interface and high volume service will be different than a service that handles periodic requests with huge message sizes. The test needs to be oriented to address the end-user goals in using the service and deliver actionable knowledge.
  2. Accurate test runs. Understanding the scalability and performance of a service requires dozens to hundreds of test case runs. Ad-hoc recording of test results is unsatisfactory. Test automation tools are plentiful and often free.
  3. Make the right conclusions when analyzing the results. Understanding the scalability and performance of a service requires understanding how the throughput measured as Transactions Per Second (TPS) at the service consumer changes with increased message size and complexity and increased concurrent requests.
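As a toy illustration of the TPS arithmetic (this is not PushToTest code, and the service call below is a placeholder), a throughput probe can be as simple as timing a batch of synchronous requests:

    import java.util.function.Supplier;

    public class ThroughputProbe {

        // Runs the given number of synchronous calls and returns transactions per second.
        public static double measureTps(int requests, Supplier<String> serviceCall) {
            long start = System.nanoTime();
            for (int i = 0; i < requests; i++) {
                serviceCall.get();   // placeholder for the real SOAP or REST consumer call
            }
            double elapsedSeconds = (System.nanoTime() - start) / 1_000_000_000.0;
            return requests / elapsedSeconds;
        }

        public static void main(String[] args) {
            // Hypothetical stand-in for a service consumer; replace with a real client call.
            double tps = measureTps(1_000, () -> "<response/>");
            System.out.printf("Measured %.1f TPS%n", tps);
        }
    }

Repeating such a run while stepping up message size, complexity, and concurrency yields the throughput curves described above.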

All of this requires much more than an ad-hoc approach to reach useful and actionable knowledge. So I built and published the PushToTest SOA test methodology to help software architects, developers, and testers. The method is described on the PushToTest.com Web site and I maintain an open-source test automation tool called PushToTest TestMaker to automate and operate SOA tests.

PushToTest provides Global Services to its customers to use our method and tools to deliver SOA scalability knowledge. Often we are successful in convincing an enterprise or vendor that contracts with PushToTest for primary research to let us publish the research under an open source license. For example, the SOA Performance kit comes with the encoding style, XML parser, and use cases. The kit is available for free download at: http://www.pushtotest.com/Downloads/kits/soakit.html and older kits are at http://www.pushtotest.com/Downloads/kits.

InfoQ: Thanks a lot for your time.


Frank Cohen is the leading authority for testing and optimizing software developed with Service Oriented Architecture (SOA) and Web Service designs. Frank is CEO and Founder of PushToTest and inventor of TestMaker, the open-source SOA test automation tool, that helps software developers, QA technicians and IT managers understand and optimize the scalability, performance, and reliability of their systems. Frank is author of several books on optimizing information systems (Java Testing and Design from Prentice Hall in 2004 and FastSOA from Morgan Kaufmann in 2006.) For the past 25 years he led some of the software industry's most successful products, including Norton Utilities for the Macintosh, Stacker, and SoftWindows. He began by writing operating systems for microcomputers, helping establish video games as an industry, helping establish the Norton Utilities franchise, leading Apple's efforts into middleware and Internet technologies, and was principal architect for the Sun Community Server. He cofounded Inclusion.net (OTC: IINC), and TuneUp.com (now Symantec Web Services.) Contact Frank at fcohen@pushtotest.com and http://www.pushtotest.com.

Source: https://www.infoq.com/articles/fastsoa-cohen/
Making DBE Interstate Certification Faster and Easier – Proposed Changes to Federal DBE Program

Monday, August 8, 2022

Preliminary Note: The U.S. Department of Transportation recently released a long-awaited Notice of Proposed Rulemaking to modernize the Disadvantaged Business Enterprise (DBE) program regulations. This blog is part of a series looking at some of the significant proposed changes. A copy of all of the proposed changes can be found here: https://www.federalregister.gov/documents/2022/07/21/2022-14586/disadvantaged-business-enterprise-and-airport-concession-disadvantaged-business-enterprise-program.

A frequent area of frustration for DBEs is the interstate certification process (the process by which a DBE can get certified in a state other than their home state).  This frustration stems from a slow-moving process with multiple requests for information that go beyond what is permitted by the rules.

Well, the USDOT shares those frustrations and has proposed sweeping changes to simplify the process.  In analyzing its appeal decisions, the USDOT found that it reversed a whopping 77% of appeals involving denials of interstate certification.  Of those reversals, 35% were because the certifier demand that the firm provide information that went beyond what is allowed in the current regulations, 49 C.F.R. § 26.85(c).  Another 26% of those appeals involved certifiers who denied the application for interstate certification for no reason!

The proposed rules require the state where interstate certification is sought (State B) to accept the home state's certification, establishing reciprocity. The required materials for an application for interstate certification are also greatly reduced: a cover letter, a screenshot showing the company's listing in its home state's UCP DBE directory, and a signed Declaration of Eligibility.

State B will have 10 business days to verify the certification and grant interstate certification. This is also a big change; currently an interstate certification application can drag on for months.  Many companies apply for interstate certification to bid on a particular job or project.  This new deadline will help those companies ensure that they will receive a timely response to their application.

Have thoughts or suggestions on the proposed PNW rules?  You can make your voice heard by offering your comment here:  https://www.regulations.gov/docket/DOT-OST-2022-0051/document.

©2022 Strassburger McKenna Gutnick & Gefsky. National Law Review, Volume XII, Number 220

Source: https://www.natlawreview.com/article/making-dbe-interstate-certification-faster-and-easier-proposed-changes-to-federal
Universities Take Advantage of Thin PCs

Colleges deploy thin clients to save money, improve security and streamline desktop management.

Sooner or later, colleges and universities have to address their aging PC fleets. And many are turning to thin clients for their computing needs because they offer many advantages over PCs.

Advances in thin-client computing offer users the power and look and feel of regular PCs — at a lower price — and that’s helping drive thin-client adoption in higher education. Besides cost savings, thin clients provide colleges many benefits, including simpler IT management. IT departments can also troubleshoot and manage the computing infrastructure from a central location: the data center.

Thin clients also improve security. Students using thin clients in a computer lab can’t change settings or install unauthorized software. If a student accidentally downloads a virus or spyware on a thin client, the infection can’t spread.

“Public computers are very much at risk, and thin clients reduce those risks,” says Gartner analyst Mark Margevicius.

For administrative users, data is better protected from thieves because thin clients lack hard drives and have no data stored locally. Thin clients also aid with continuity of operations planning and can ensure 24x7 uptime because servers can be configured to be redundant. The devices are also more eco-friendly and consume less power than regular computers.

Tech manufacturers have made several innovations in thin-client technology in recent years. They are no longer the dumb terminals of the past. Today’s thin clients have a processor, RAM and Flash memory, allowing applications to run locally and boosting the performance of web browsing, video and other multimedia applications.

Colleges also have numerous thin-client architectures from which to pick. In the traditional thin-client model, through software such as Microsoft’s Terminal Services or Citrix System’s XenApp, keystrokes and mouse clicks are sent back and forth between the thin client and the data center. The servers perform the processing and send a view of the screen to the user’s desktop.

Other thin-client alternatives include blade PCs, which are real PCs housed like servers in a data center. Users connect to the PCs through thin-client devices. Another thin-client option is desktop virtualization, or virtual desktop infrastructure (VDI). Desktop virtualization partitions servers into separate virtual machines, which gives users “virtual computers” with an operating system and applications.

Here’s a look at how thin clients have made three colleges more productive.

Cleveland State University

When server specialist Jeff Grigsby learned he could provide students a PC-like experience with a full operating system and access to multimedia applications while reaping the numerous benefits of thin clients, he was sold.

Cleveland State University, which provides students with 350 computers in seven computer labs, began swapping out PCs with thin clients last fall to save money. Because students demand good multimedia performance, Grigsby deployed a thin-client architecture called OS streaming, which gives students access to a full operating system and applications on the thin clients.

When a student logs in, a server grabs a computer image — featuring Windows XP, Microsoft Office and other applications — from the college’s storage area network (SAN) and delivers it over the network to a high-end thin-client device.

The OS and applications run locally on the thin client, but all the data is stored on the SAN. When students log in, they are each given 2 gigabytes of cache storage on the SAN, which stores temporary files while the students are using applications.

“We needed the full XP operating system because our students want to use media,” Grigsby says. “Audio and video are big things that they use in the labs, and this provides a good experience for them.”

Worldwide thin-client sales are expected to grow from 2.9 million in 2008 to 3.4 million in 2009, an increase of 17.5 percent.

Source: IDC

Cleveland State began the migration last summer. So far it has switched 220 of the 350 computers to thin clients and hopes to finish the migration by fall 2010. The IT department standardized on Wyse V00LE thin clients, which feature a 1.2 gigahertz processor and 1 gigabyte of RAM, and Citrix’s streaming OS software, called Citrix Provisioning Server for Desktops.

The IT department also purchased two Hewlett-Packard BladeSystem c-Class server blades, featuring Intel dual-core processors and 4GB of RAM. The two blades increase reliability: If one blade fails, the other can handle the workload and prevent downtime, Grigsby says.

Nearly one year into the deployment, Cleveland State has already seen a return on investment. Compared with the cost of PCs, an entire thin-client solution — including servers, software licenses and the thin clients themselves — saves the college about $200 per client. When the project is complete, Grigsby expects thin clients will save the university about $50,000 annually.

The thin clients are easy to manage, Grigsby says, because there is only one computer image. As for security, Grigsby likes that the image is in read-only mode, so students can’t change the computer settings.

Grigsby is making improvements to the system. The computer image with the OS and applications was initially 50GB. To speed the OS streaming, Grigsby is reducing the size of the image by moving large applications to a more traditional thin-client architecture. He’s already moved some applications to Citrix XenApp, which has reduced the computer image to 32GB. He hopes to speed the streaming OS even further this summer by moving more large applications to XenApp and reducing the image to 8 to 12GB.

A need to update the PC fleet led the University of Pittsburgh’s Paul Milazzo to opt for thin clients. “They use very little electricity, have no fans, generate no heat and they’re easy to manage.”

Photo Credit: Jeff Swensen

University of Pittsburgh

Last fall, as aging PCs in the student computer lab broke down every other day, network administrator Paul Milazzo knew his school could no longer hold off on purchasing new technology.

Students and staff in the University of Pittsburgh’s School of Dental Medicine rely on the lab to check e-mail and browse the web. Upper-level students who see patients also use the lab to access the school’s online patient management system. The lab’s 30 beat-up computers were nearly six years old. Every other day, Milazzo and his IT colleagues were called in to fix something: a crashed hard drive, failed CD-ROM, sticky keyboard or dead mouse.

All the breakdowns and troubleshooting helped make the case for Milazzo to purchase thin clients. This past January, the school switched to HP thin clients on an HP ProLiant DL380 dual-quad core server with 12GB of RAM running Microsoft Terminal Services.

“We looked for something small and simple with no moving parts,” says Milazzo, now a systems architect for the university. “We knew there would be energy and cost savings, so instead of buying all new desktops, we decided to centralize everything.”

The HP t5630 thin client, built with a 1GHz processor and 1GB of RAM, runs Windows XP embedded. Microsoft Terminal Services software on Windows Server 2003 uses Remote Desktop Protocol (RDP), a communications protocol that transfers a user’s mouse and keyboard clicks to the server, which in turn, sends the graphical output to the user’s thin-client device.

To speed multimedia performance, the IT staff installed some applications locally on each thin client’s Flash drive, including Adobe Flash and Apple QuickTime.

“It works great,” Milazzo says. “XP Embedded’s look is familiar to users. Software like Word and PowerPoint works like it does on any other machine, and they love that it boots up in six seconds. You can’t tell you’re on a thin client, and that’s what we were shooting for.”

Milazzo made the thin clients easy to use. When students log on, four icons pop up, giving students access to applications. The first icon is Internet Explorer for web surfing. The three other icons let students log on to terminal sessions.

Some thin clients can consume as little as 6.6 watts of energy, compared with desktops that can consume as much as 150 watts.

Source: Wyse Technology

The first session gives access to the live patient management system, while the second session connects to a test patient management system, set up to train students on how to use the application. The third session gives students access to Microsoft Office, Adobe Photoshop and other general applications.

The IT staff disabled the USB ports on the thin clients to prevent students from making copies of documents on removable media, such as thumb drives. The dental school must protect patient data because of the Health Insurance Portability and Accountability Act. To save documents, students have to visit the IT department, where staffers make sure the files being saved are not sensitive. “There’s no way around it because of HIPAA regulations,” Milazzo says.

While it’s too soon to determine cost savings, Milazzo says the school is taking advantage of central IT management. With Symantec’s Altiris management software, the IT department can manage the thin clients remotely — for instance, to power down devices or set a common screen saver.

Cost is the prime driver for Murray State University’s Tim McNeely. “Long term, our cost is lower. Instead of replacing PCs every five years, we just have to replace three servers every five to seven years, and that’s cheaper.”

Photo Credit: Tamara Reynolds

Murray State University

When Murray State’s College of Humanities and Fine Arts needed to replace 150 aging PCs in its classrooms and computer labs, the IT department switched to thin-client computing for one reason: It wouldn’t take a big, fat chunk out of the budget.

Tim McNeely, the college’s technology coordinator, first considered buying new PCs, but after doing some research he realized thin clients are more affordable. It would have cost $150,000 to replace all the PCs in five classrooms and two small computer labs. Buying an entire thin-client solution — three servers, software licensing and the less expensive thin-client devices — cost only $98,000, a $52,000 savings.

The Kentucky school expects to save even more money in the long run. Because thin clients have no moving parts, such as hard drives and fans, they last several years longer than PCs. When the servers need replacing or software needs upgrading, the college will need to spend only about $8,000 a year.

Microsoft’s Terminal Services will support between 25 and 40 users per processor with 3 to 4 GB of RAM. A virtual desktop infrastructure can typically support six to eight virtual machines per processor core.

Source: CDW, VMware
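As a back-of-the-envelope illustration of that rule of thumb (the seat count, server configuration, and class names below are assumptions, not figures from the article), a terminal-server capacity estimate looks like this:

    public class TerminalServerSizing {

        // Rough estimate using the 25-40 users per processor rule of thumb.
        public static int serversNeeded(int users, int processorsPerServer, int usersPerProcessor) {
            int usersPerServer = processorsPerServer * usersPerProcessor;
            return (users + usersPerServer - 1) / usersPerServer;  // round up
        }

        public static void main(String[] args) {
            // 150 lab seats on dual-processor servers at the conservative end of the range
            System.out.println(serversNeeded(150, 2, 25));  // prints 3
        }
    }

The same arithmetic applies to VDI sizing using the six to eight virtual machines per processor core figure.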

“Long term, our cost is lower,” McNeely says. “Instead of replacing PCs every five years, we just have to replace three servers every five to seven years, and that’s cheaper.”

McNeely says Murray State’s five classrooms with PCs in the College of Humanities and Fine Arts are packed all day with students who need computers to do research and write papers.

Over the past two years, McNeely has replaced the PCs with a Terminal Services thin-client system featuring three IBM System x3650 rack servers, Wyse S30 thin clients with 400 megahertz processors and 128 megabytes of RAM, and 19-inch LG flat-screen monitors.

Two servers run terminal sessions that provide users access to Microsoft Office and the web. The workload is split evenly between the two servers, but if one server goes down, the other can handle the entire load without a major performance hit, McNeely says. The third server provides users access to a statistics application used by students in government, law, international affairs and psychology. The college also runs a virtual server for managing all the sessions, he says.

The ability to remotely and centrally manage the thin clients has saved the IT staff a lot of time, McNeely says. In the past, if professors wanted to add new software, the IT staff would have to schedule around class times to install the software in all 30 computers in a room. Now, with thin clients and remote tools, the IT staff can install the software on the servers immediately.

“It’s definitely decreased our workload. When we’re upgrading software, we do it on three servers without having to touch the 150 stations,” he says.

The college also expects to save significant dollars because the thin clients use very little power and generate very little heat, which saves on air-conditioning costs. Classrooms are also quieter because thin clients don’t have fans, he says.

McNeely is tapping his experience to help three other schools within the university with their thin-client implementations. Students have adapted to the new technology quickly, with no complaints or questions, he says.

“It was very important to us that the user experience be as easy and responsive as the desktop computers we were replacing,” he says.

Ask Some Good Questions

IDC analyst Bob O’Donnell says IT departments should ask these questions to help them settle on a thin-client solution:

1. What are the computing needs of your users? Determine your priorities first, and that will help you determine what kind of architecture and thin-client devices you choose.

2. What kind of infrastructure do you already have? Your existing data center technology should drive your decision.

3. What kind of Microsoft software licenses do you have? Some licenses let IT departments share full operating systems in a virtual-client environment.

4. How good is your IT staff? Implementing the traditional thin-client architecture is easy, but other architectures (such as virtual desktop infrastructure and streaming OS, for which an organization may use a common computer image) can be complex. Make sure the IT department has the expertise to implement the technology you choose.

5. Who will manage the thin clients? The IT staff who manage the servers, or the IT staff who manage the clients? Work out the turf battles in advance.

Choose an Operating System

Operating system choices for thin clients include Windows CE, Windows XP Embedded, Linux and custom OSes. Here’s a rundown:

Windows CE. This OS has a smaller footprint than XP Embedded and is capable of Internet browsing and multimedia applications. It supports Windows, mainframe and basic web applications, but it has limited support for peripherals.

XP Embedded. This OS features an interface similar to XP, is good for Windows 32-bit applications, supports multimedia applications and Internet browsing, and has extensive hardware peripheral support. It also supports connectivity to mainframe and feature-rich web applications. On the negative side, it has a large memory requirement.

Linux. An embedded Linux operating system requires a small amount of memory. It’s customizable with open-source software and components and supports Internet browsing and multimedia capabilities. It has limited peripheral support.

Bolster Multimedia Support

IT departments that use traditional thin-client architecture can ensure quality multimedia through Wyse TCX Multimedia technology. In the past, playing audio or video over Independent Computing Architecture or Remote Display Protocol resulted in spotty performance. With Wyse TCX Multimedia 3.0, users can run multimedia applications locally on the thin client, delivering a smooth multimedia experience. The Wyse TCX software works with Citrix XenApp, Microsoft Terminal Server and VMware View.

By Wylie Wong. Source: https://edtechmagazine.com/higher/article/2009/05/universities-take-advantage-thin-pcs
Workload Scheduling Software Market Size and Growth 2022 Analysis Report by Development Plans, Manufactures, Latest Innovations and Forecast to 2028

The MarketWatch News Department was not involved in the creation of this content.

Aug 03, 2022 (The Expresswire) -- "Final Report will add the analysis of the impact of COVID-19 on this industry."

Global “Workload Scheduling Software Market” 2022 report presents a comprehensive study of the entire global market, including market size, share trends, and market dynamics, with an overview of segmentation by type, application, manufacturer, and geographical region. The report offers the most up-to-date industry data on the real market situation and future outlook for the Workload Scheduling Software market. The report also provides up-to-date historical market size data for the period and an illustrative forecast to 2028 covering key market aspects like market value and volume for the Workload Scheduling Software industry.

Get a sample PDF of the Report - https://www.absolutereports.com/enquiry/request-sample/21317277

Market Analysis and Insights: Global Workload Scheduling Software Market

System management software is an application that manages all applications of an enterprise such as scheduling and automation, event management, workload scheduling, and performance management. Workload scheduling software is also known as batch scheduling software. It automates, monitors, and controls jobs or workflows in an organization. It allows the execution of background jobs that are unattended by the system administrator, aligning IT with business objectives to Strengthen an organization's performance and reduce the total cost of ownership. This process is known as batch processing. Workload scheduling software provides a centralized view of operations to the system administrator at various levels: project, organizational, and enterprise.
The global Workload Scheduling Software market size is projected to reach USD million by 2028, from USD million in 2021, at a CAGR of during 2022-2028.
According to the report, workload scheduling involves automation of jobs, in which tasks are executed without human intervention. Solutions like ERP and customer relationship management (CRM) are used in organizations across the globe. ERP, which is a business management software, is a suite of integrated applications that is being used by organizations in various sectors for data collection and interpretation related to business activities such as sales and inventory management. CRM software is used to manage customer data and access business information.

The major players covered in the Workload Scheduling Software market report are:

● BMC Software ● Broadcom ● IBM ● VMWare ● Adaptive Computing ● ASG Technologies ● Cisco ● Microsoft ● Stonebranch ● Wrike ● ServiceNow ● Symantec ● Sanicon Services ● Cloudify

Get a sample Copy of the Workload Scheduling Software Market Report 2022

Global Workload Scheduling Software Market: Drivers and Restraints

The research report has incorporated the analysis of different factors that augment the market’s growth. It constitutes trends, restraints, and drivers that transform the market in either a positive or negative manner. This section also provides the scope of different segments and applications that can potentially influence the market in the future. The detailed information is based on current trends and historic milestones. This section also provides an analysis of the volume of production about the global market and about each type from 2017 to 2028. This section mentions the volume of production by region from 2017 to 2028. Pricing analysis is included in the report according to each type from the year 2017 to 2028, manufacturer from 2017 to 2022, region from 2017 to 2022, and global price from 2017 to 2028.

A thorough evaluation of the restraints included in the report portrays the contrast to drivers and gives room for strategic planning. Factors that overshadow the market growth are pivotal, as understanding them helps in devising strategies for getting hold of the lucrative opportunities present in the ever-growing market. Additionally, insights into market experts’ opinions have been taken to understand the market better.

To Understand How Covid-19 Impact Is Covered in This Report - https://www.absolutereports.com/enquiry/request-covid19/21317277

Global Workload Scheduling Software Market: Segment Analysis

The research report includes specific segments by region (country), by manufacturer, by Type and by Application. Each Type segment provides information about production during the forecast period of 2017 to 2028. The Application segment likewise provides consumption during the forecast period of 2017 to 2028. Understanding the segments helps in identifying the importance of different factors that aid the market growth.

Segment by Type

● On-Premises ● Cloud-Based

Segment by Application

● Large Enterprises ● Small And Medium-Sized Enterprises (SMEs) ● Government Organizations

Workload Scheduling Software Market Key Points:

● Define, describe and forecast the Workload Scheduling Software market by product type, application, manufacturer and geographical region. ● Provide analysis of the external business environment. ● Provide strategies for companies to manage the impact of COVID-19. ● Provide market dynamics analysis, including market driving factors and market development constraints. ● Provide market-entry analysis for new players or players ready to enter the market, including market segment definition, customer analysis, distribution model, product messaging and positioning, and price strategy analysis. ● Keep up with global market trends and provide analysis of the impact of the COVID-19 epidemic on major regions of the world. ● Analyze the market opportunities of stakeholders and provide market leaders with details of the competitive landscape.

Inquire or Share Your Questions If Any before the Purchasing This Report - https://www.absolutereports.com/enquiry/pre-order-enquiry/21317277

Geographical Segmentation:

Geographically, this report is segmented into several key regions, with sales, revenue, market share, and Workload Scheduling Software market growth rate in these regions, from 2015 to 2028, covering

● North America (United States, Canada and Mexico) ● Europe (Germany, UK, France, Italy, Russia and Turkey etc.) ● Asia-Pacific (China, Japan, Korea, India, Australia, Indonesia, Thailand, Philippines, Malaysia, and Vietnam) ● South America (Brazil etc.) ● Middle East and Africa (Egypt and GCC Countries)

Some of the key questions answered in this report:

● Who are the key global players in the Workload Scheduling Software industry? ● How will competition in the Workload Scheduling Software market evolve in the future? ● Which country leads the Workload Scheduling Software industry? ● What are the market opportunities and threats faced by manufacturers in the global Workload Scheduling Software industry? ● Which application, end user, or product type may see incremental growth prospects? What is the market share of each type and application? ● What focused approach and constraints are holding back the Workload Scheduling Software market? ● What are the different sales, marketing, and distribution channels in the global industry? ● What are the key market trends influencing the growth of the Workload Scheduling Software market? ● What is the economic impact on the Workload Scheduling Software industry, and what is its development trend?

Purchase this Report (Price 2900 USD for a Single-User License) -https://www.absolutereports.com/purchase/21317277

Detailed TOC of Global Workload Scheduling Software Market Research Report 2022

1 Workload Scheduling Software Market Overview

1.1 Product Overview and Scope

1.2 Segment by Type

1.2.1 Global Market Size Growth Rate Analysis by Type 2022 VS 2028

1.3 Workload Scheduling Software Segment by Application

1.3.1 Global Consumption Comparison by Application: 2022 VS 2028

1.4 Global Market Growth Prospects

1.4.1 Global Revenue Estimates and Forecasts (2017-2028)

1.4.2 Global Production Capacity Estimates and Forecasts (2017-2028)

1.4.3 Global Production Estimates and Forecasts (2017-2028)

1.5 Global Market Size by Region

1.5.1 Global Market Size Estimates and Forecasts by Region: 2017 VS 2021 VS 2028

1.5.2 North America Workload Scheduling Software Estimates and Forecasts (2017-2028)

1.5.3 Europe Estimates and Forecasts (2017-2028)

1.5.4 China Estimates and Forecasts (2017-2028)

1.5.5 Japan Estimates and Forecasts (2017-2028)

2 Workload Scheduling Software Market Competition by Manufacturers

2.1 Global Production Capacity Market Share by Manufacturers (2017-2022)

2.2 Global Revenue Market Share by Manufacturers (2017-2022)

2.3 Market Share by Company Type (Tier 1, Tier 2 and Tier 3)

2.4 Global Average Price by Manufacturers (2017-2022)

2.5 Manufacturers Production Sites, Area Served, Product Types

2.6 Market Competitive Situation and Trends

2.6.1 Market Concentration Rate

2.6.2 Global 5 and 10 Largest Workload Scheduling Software Players Market Share by Revenue

2.6.3 Mergers and Acquisitions, Expansion

3 Workload Scheduling Software Production Capacity by Region

3.1 Global Production Capacity of Workload Scheduling Software Market Share by Region (2017-2022)

3.2 Global Revenue Market Share by Region (2017-2022)

3.3 Global Production Capacity, Revenue, Price and Gross Margin (2017-2022)

3.4 North America Production

3.4.1 North America Production Growth Rate (2017-2022)

3.4.2 North America Production Capacity, Revenue, Price and Gross Margin (2017-2022)

3.5 Europe Production

3.5.1 Europe Production Growth Rate (2017-2022)

3.5.2 Europe Production Capacity, Revenue, Price and Gross Margin (2017-2022)

3.6 China Production

3.6.1 China Production Growth Rate (2017-2022)

3.6.2 China Production Capacity, Revenue, Price and Gross Margin (2017-2022)

3.7 Japan Production

3.7.1 Japan Production Growth Rate (2017-2022)

3.7.2 Japan Production Capacity, Revenue, Price and Gross Margin (2017-2022)

4 Global Workload Scheduling Software Market Consumption by Region

4.1 Global Consumption by Region

4.1.1 Global Consumption by Region

4.1.2 Global Consumption Market Share by Region

4.2 North America

4.2.1 North America Consumption by Country

4.2.2 United States

4.2.3 Canada

4.3 Europe

4.3.1 Europe Consumption by Country

4.3.2 Germany

4.3.3 France

4.3.4 U.K.

4.3.5 Italy

4.3.6 Russia

4.4 Asia Pacific

4.4.1 Asia Pacific Consumption by Region

4.4.2 China

4.4.3 Japan

4.4.4 South Korea

4.4.5 China Taiwan

4.4.6 Southeast Asia

4.4.7 India

4.4.8 Australia

4.5 Latin America

4.5.1 Latin America Consumption by Country

4.5.2 Mexico

4.5.3 Brazil

Get a sample Copy of the Workload Scheduling Software Market Report 2022

5 Workload Scheduling Software Market Segment by Type

5.1 Global Production Market Share by Type (2017-2022)

5.2 Global Revenue Market Share by Type (2017-2022)

5.3 Global Price by Type (2017-2022)

6 Workload Scheduling Software Market Segment by Application

6.1 Global Production Market Share by Application (2017-2022)

6.2 Global Revenue Market Share by Application (2017-2022)

6.3 Global Price by Application (2017-2022)

7 Workload Scheduling Software Market Key Companies Profiled

7.1 Manufacture 1

7.1.1 Manufacture 1 Corporation Information

7.1.2 Manufacture 1 Product Portfolio

7.1.3 Manufacture 1 Production Capacity, Revenue, Price and Gross Margin (2017-2022)

7.1.4 Manufacture 1 Main Business and Markets Served

7.1.5 Manufacture 1 Recent Developments/Updates

7.2 Manufacture 2

7.2.1 Manufacture 2 Corporation Information

7.2.2 Manufacture 2 Product Portfolio

7.2.3 Manufacture 2 Production Capacity, Revenue, Price and Gross Margin (2017-2022)

7.2.4 Manufacture 2 Main Business and Markets Served

7.2.5 Manufacture 2 Recent Developments/Updates

7.3 Manufacture 3

7.3.1 Manufacture 3 Corporation Information

7.3.2 Manufacture 3 Product Portfolio

7.3.3 Manufacture 3 Production Capacity, Revenue, Price and Gross Margin (2017-2022)

7.3.4 Manufacture 3 Main Business and Markets Served

7.3.5 Manufacture 3 Recent Developments/Updates

8 Workload Scheduling Software Manufacturing Cost Analysis

8.1 Key Raw Materials Analysis

8.1.1 Key Raw Materials

8.1.2 Key Suppliers of Raw Materials

8.2 Proportion of Manufacturing Cost Structure

8.3 Manufacturing Process Analysis of Workload Scheduling Software

8.4 Workload Scheduling Software Industrial Chain Analysis

9 Marketing Channel, Distributors and Customers

9.1 Marketing Channel

9.2 Workload Scheduling Software Distributors List

9.3 Workload Scheduling Software Customers

10 Market Dynamics

10.1 Workload Scheduling Software Industry Trends

10.2 Workload Scheduling Software Market Drivers

10.3 Workload Scheduling Software Market Challenges

10.4 Workload Scheduling Software Market Restraints

11 Production and Supply Forecast

11.1 Global Forecasted Production of Workload Scheduling Software by Region (2023-2028)

11.2 North America Workload Scheduling Software Production, Revenue Forecast (2023-2028)

11.3 Europe Workload Scheduling Software Production, Revenue Forecast (2023-2028)

11.4 China Workload Scheduling Software Production, Revenue Forecast (2023-2028)

11.5 Japan Workload Scheduling Software Production, Revenue Forecast (2023-2028)

12 Consumption and Demand Forecast

12.1 Global Forecasted Demand Analysis of Workload Scheduling Software

12.2 North America Forecasted Consumption of Workload Scheduling Software by Country

12.3 Europe Market Forecasted Consumption of Workload Scheduling Software by Country

12.4 Asia Pacific Market Forecasted Consumption of Workload Scheduling Software by Region

12.5 Latin America Forecasted Consumption of Workload Scheduling Software by Country

13 Forecast by Type and by Application (2023-2028)

13.1 Global Production, Revenue and Price Forecast by Type (2023-2028)

13.1.1 Global Forecasted Production of Workload Scheduling Software by Type (2023-2028)

13.1.2 Global Forecasted Revenue of Workload Scheduling Software by Type (2023-2028)

13.1.3 Global Forecasted Price of Workload Scheduling Software by Type (2023-2028)

13.2 Global Forecasted Consumption of Workload Scheduling Software by Application (2023-2028)

13.2.1 Global Forecasted Production of Workload Scheduling Software by Application (2023-2028)

13.2.2 Global Forecasted Revenue of Workload Scheduling Software by Application (2023-2028)

13.2.3 Global Forecasted Price of Workload Scheduling Software by Application (2023-2028)

14 Research Finding and Conclusion

15 Methodology and Data Source

15.1 Methodology/Research Approach

15.1.1 Research Programs/Design

15.1.2 Market Size Estimation

15.1.3 Market Breakdown and Data Triangulation

15.2 Data Source

15.2.1 Secondary Sources

15.2.2 Primary Sources

15.3 Author List

15.4 Disclaimer

For Detailed TOC - https://www.absolutereports.com/TOC/21317277#TOC

Contact Us:

Absolute Reports

Phone : US +1 424 253 0807

UK +44 203 239 8187

Email : sales@absolutereports.com

Web : https://www.absolutereports.com

Our Other Reports:

High Performance Tape Market Size and Growth 2022 Analysis Report by Development Plans, Manufactures, Latest Innovations and Forecast to 2028

Global Electroplated Diamond Wire for Photovoltaic Wafer Market Size and Growth 2022 Analysis Report by Dynamics, SWOT Analysis, CAGR Status, Industry Developments and Forecast to 2028

Global Handle Paper Bags Market 2022 Size, Share, Business Opportunities, Trends, Growth Factors, Development, Key Players Segmentation and Forecast to 2028

Global GSM Gateway Market 2022 Size, Latest Trends, Industry Analysis, Growth Factors, Segmentation by Data, Emerging Key Players and Forecast to 2028

Press Release Distributed by The Express Wire

To view the original version on The Express Wire visit Workload Scheduling Software Market Size and Growth 2022 Analysis Report by Development Plans, Manufactures, Latest Innovations and Forecast to 2028

COMTEX_411474152/2598/2022-08-03T02:15:57

Is there a problem with this press release? Contact the source provider Comtex at editorial@comtex.com. You can also contact MarketWatch Customer Service via our Customer Center.

The MarketWatch News Department was not involved in the creation of this content.

Source: https://www.marketwatch.com/press-release/workload-scheduling-software-market-size-and-growth-2022-analysis-report-by-development-plans-manufactures-latest-innovations-and-forecast-to-2028-2022-08-03
Weaving a New Web

In 1969 scientists at the University of California, Los Angeles, transmitted a couple of bits of data between two computers, and thus the Internet was born. Today about 2 billion people access the Web regularly, zipping untold exabytes of data (that’s 10^18 pieces of information) through copper and fiber lines around the world. In the United States alone, an estimated 70 percent of the population owns a networked computer. That number grows to 80 percent if you count smartphones, and more and more people jump online every day. But just how big can the information superhighway get before it starts to buckle? How much growth can the routers and pipes handle? The challenges seem daunting. The current Internet Protocol (IP) system that connects global networks has nearly exhausted its supply of 4.3 billion unique addresses. Video is projected to account for more than 90 percent of all Internet traffic by 2014, a sudden new demand that will require a major increase in bandwidth. Malicious software increasingly threatens national security. And consumers may face confusing new options as Internet service providers consider plans to create a “fast lane” that would prioritize some Web sites and traffic types while others are routed more slowly.

Fortunately, thousands of elite network researchers spend their days thinking about these thorny issues. Last September DISCOVER and the National Science Foundation convened four of them for a lively discussion, hosted by the Georgia Institute of Technology in Atlanta, on the next stage of Internet evolution and how it will transform our lives. DISCOVER editor in chief Corey S. Powell joined Cisco’s Paul Connolly, who works with Internet service providers (ISPs); Georgia Tech computer scientist Nick Feamster, who specializes in network security; William Lehr of MIT, who studies wireless technology, Internet architecture, and the economic and policy implications of online access; and Georgia Tech’s Ellen Zegura, an expert on mobile networking.

Powell: Few people anticipated Google’s swift rise, the vast influence of social media, or the Web’s impact on the music, television, and publishing industries. How do we even begin to map out what will come next?

Lehr: One thing the Internet has taught us thus far is that we can’t predict it. That’s wonderful because it allows for the possibility of constantly reinventing it.

Zegura: Our response to not being able to predict the Internet is to try to make it as flexible as possible. We don’t know for sure what will happen, so if we can create a platform that can accommodate many possible futures, we can position ourselves for whatever may come. The current Internet has held up quite well, but it is ready for some changes to prepare it to serve us for the next 30, 40, or 100 years. By building the ability to innovate into the network, we don’t have to know exactly what’s coming down the line. That said, Nick and others have been working on a test bed called GENI, the Global Environment for Network Innovations project that will allow us to experiment with alternative futures.

Powell: Almost like using focus groups to redesign the Internet?

Zegura: That’s not a bad analogy, although some of the testing might be more long-term than a traditional focus group.

Powell: What are some major online trends, and what do they suggest about where we are headed?

Feamster: We know that paths are getting shorter: From point A to point B, your traffic is going through fewer and fewer Internet service providers. And more and more data are moving into the cloud. Between now and 2020, the number of people on the Internet is expected to double. For those who will come online in the next 10 years or so, we don’t know how they’re going to access the Internet, how they’re going to use it, or what kinds of applications they might use. One trend is the proliferation of mobile devices: There could be more than a billion cell phones in India alone by 2015.

Powell: So there’s a whole universe of wireless connectivity that could potentially become an Internet universe?

Feamster: Absolutely. We know things are going to look vastly different from people sitting at desktops or laptops and browsing the Web. Also, a lot of Internet innovation has come not from research but from the private sector, both large companies and start-ups. As networking researchers, we should be thinking about how best to design the network substrate to allow it to evolve, because all we know for sure is that it’s going to keep changing.

Powell: What kind of changes and challenges do you anticipate?

Lehr: We’re going to see many different kinds of networks. As the Internet pushes into the developing world, the emphasis will probably be on mobile networks. For now, the Internet community is still very U.S.-centric. Here, we have very strong First Amendment rights (see “The Five Worst Countries for Surfing the Web,” below), but that’s not always the case elsewhere in the world, so that’s something that could cause friction as access expands.

Powell: Nearly 200 million Americans have a broadband connection at home. The National Broadband Plan proposes that everyone here should have affordable broadband access by 2020. Is private industry prepared for this tremendous spike in traffic?

Connolly: Our stake in the ground is that global traffic will quadruple by 2014, and we believe 90 percent of consumer traffic will be video-based. The question is whether we can deal with all those bits at a cost that allows stakeholders to stay in business. The existing Internet is not really designed to handle high volumes of media. When we look at the growth rate of bandwidth, it has followed a consistent path, but you have to focus on technology at a cost. If we can’t hit a price target, it doesn’t go mainstream. When we hit the right price, all of a sudden people say, “I want to do that,” and away we go.

Powell: As networks connect to crucial systems—such as medical equipment, our homes, and the electrical grid—disruptions will become costly and even dangerous. How do we keep everything working reliably?

Lehr: We already use the cyber world to control the real world in our car engines and braking systems, but when we start using the Internet, distributed networks, and resources on some cloud to make decisions for us, that raises a lot of questions. One could imagine all kinds of scenarios. I might have an insulin pump that’s controlled over the Internet, and some guy halfway around the world can hack into it and change my drug dosage.

Feamster: The late Mark Weiser, chief technologist at the Xerox Palo Alto Research Center, said the most profound technologies are the ones that disappear. When we drive a car, we’re not even aware that there’s a huge network under the hood. We don’t have to know how it works to drive that car. But if we start networking appliances or medical devices and we want those networks to disappear in the same way, we need to rely on someone else to manage them for us, so privacy is a huge concern. How do I provide someone visibility and access so they can fix a problem without letting them see my personal files, or use my printer, or open my garage door? The issues that span usability and privacy are going to become increasingly important.

Zegura: I would not be willing to have surgery over the Internet today because it’s not secure or reliable enough. Many environments are even more challenging: disaster situations, remote areas, military settings. But many techniques have been developed to deal with places that lack robust communications infrastructure. For instance, my collaborators and I have been developing something called message ferries. These are mobile routers, nodes in the environment that enable communication. Message ferries could be on a bus, in a backpack, or on an airplane. Like a ferry picks up passengers, they pick up messages and deliver them to another region.

Powell: Any takers for surgery over the Internet? Show of hands?

Lehr: If I’m in the Congo and I need surgery immediately, and that’s the only way they can provide it to me, sure. Is it ready for prime time? Absolutely not.


Powell: Many Web sites now offer services based on “cloud computing.” What is the concept behind that?

Feamster: One of the central tenets of cloud computing is virtualization. What that means is that instead of having hardware that’s yours alone, you share it with other people, whom you might not trust. This is evident in Gmail and Google Docs. Your personal documents are sitting on the same machine with somebody else’s. In this kind of situation, it’s critical to be able to track where data go. Several of my students are working on this issue.

Powell: With more and more documents moving to the cloud, aren’t there some complications from never knowing exactly where your data are or what you’re connecting to?

Lehr: A disconnect between data and physical location puts providers in a difficult position—for example, Google deciding what to do with respect to filtering search results in China. It’s a global technology provider. It can potentially influence China’s rules, but how much should it try to do that? People are reexamining this issue at every level.

Powell: In one recent survey, 65 percent of adults in 14 countries reported that they had been the victim of some type of cyber crime. What do people need to know to protect themselves?

Feamster: How much do you rely on educating users versus shielding them from having to make sensitive decisions? In some instances you can prevent people from making mistakes or doing malicious things. Last year, for instance, Goldman Sachs was involved in a legal case in which the firm needed to show that no information had been exchanged between its trading and accounting departments. That’s the kind of thing that the network should just take care of automatically, so it can’t happen no matter what users do.

Zegura: I agree that in cases where it’s clear that there is something people should not do, and we can make it impossible to do it, that’s a good thing. But we can’t solve everything that way. There is an opportunity to help people understand more about what’s going on with networks so they can look out for themselves. A number of people don’t understand how you can get e-mail that looks like it came from your mother, even though it didn’t. The analogy is that someone can take an envelope and write your name on it, write your mother’s name on the return address, and stick it in your mailbox. Now you have a letter in your mailbox that looks like it came from your mother, but it didn’t. The same thing can happen with e-mail. It’s possible to write any address on an Internet packet so it looks like it came from somewhere else. That’s a very basic understanding that could help people be much smarter about how they use networks.
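Zegura’s envelope analogy maps directly onto how e-mail works under the hood: the “From” line is ordinary text chosen by whoever sends the message. As a minimal sketch, using only Python’s standard library and hypothetical addresses, the snippet below builds a well-formed message that merely claims to come from someone else. Plain SMTP does not verify this header on its own; standards such as SPF, DKIM, and DMARC were added later so receiving servers can check a sender’s claims.

```python
# A minimal sketch of why an e-mail's "From" line proves nothing by itself:
# it is plain text written by the sender, like the return address on an envelope.
# All addresses here are hypothetical examples.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "mom@example.com"    # arbitrary text; nothing checks it at composition time
msg["To"] = "you@example.com"
msg["Subject"] = "Hi from Mom"
msg.set_content("The From header above was chosen by the sender, not verified by anyone.")

print(msg)  # a well-formed message that merely claims to be from mom@example.com
```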

Audience: How is the Internet changing the way we learn?

Feamster: Google CEO Eric Schmidt once gave an interview in which he was talking about how kids are being quizzed on things like country capitals. He essentially said, “This is ridiculous. I can just go to Google and search for capitals. What we really should be teaching students is where to find answers.” That’s perhaps the viewpoint of someone who is trying to catalog all the world’s information and says, “Why don’t you use it?” But there’s something to be said for it—there’s a lot of data at our fingertips. Maybe education should shift to reflect that.

Audience: Do you think it will ever be possible to make the Internet totally secure?

Feamster: We’ll never have perfect security, but we can make it tougher. Take the problem of spam. You construct new spam filters, and then the spammers figure out that you’re looking for messages sent at a certain time or messages of a certain size, so they have to shuffle things up a bit. But the hope is that you’ve made it harder. It’s like putting up a higher fence around your house. You won’t stop problems completely, but you can make break-ins inconvenient or costly enough to mitigate them.

Audience: Should there be limits on how much personal information can be collected online?

Zegura: Most of my undergraduate students have a sensitivity to private information that’s very different from mine. But even if we’re savvy, we can still be unaware of the personal data that some companies collect. In general, it needs to be much easier for people to make informed choices.

Feamster: The thing that scares me the most is what happens when a company you thought you trusted gets bought or goes out of business and sells all of your data to the lowest bidder. There are too few regulations in place to protect us, even if we understand the current privacy policies.

Lehr: Technologically, Bill Joy [co-founder of Sun Microsystems] was right when he said, “Privacy is dead; just get over it.” Privacy today can no longer be about whether someone knows something, because we can’t regulate that effectively. What matters now is what they can do with what they know.

Audience: Wiring society creates the capacity to crash society. The banking system, utilities, and business administration are all vulnerable. How do we meaningfully weigh the benefits against the risks?


Lehr: How we decide to use networks is very important. For example, we might decide to have separate networks for certain systems. I cannot risk some kid turning on a generator in the Ukraine and blowing something up in Kentucky, so I might keep my electrical power grid network completely separate. This kind of question engages more than just technologists. A wider group of stakeholders needs to weigh in.

Connolly: You always have to balance the good versus the potential for evil. Occasionally big blackouts in the Northeast cause havoc, but if we decided not to have electricity because of that risk, that would be a bad decision, and I don’t think it’s any worse in the case of the Internet. We have to be careful, but there’s so much possibility for enormous good. The power of collaboration, with people working together through the Internet, gives us tremendous optimism for the kinds of issues we will be able to tackle.

The Conversation in Context: 12 Ideas That Will Reshape the Way We Live and Work Online

1. Change how the data flow
A good place to start is with the overburdened addressing system, known as IPv4. Every device connected to the Internet, including computers, smartphones, and servers, has a unique identifier, or Internet protocol (IP) address. “Whenever you type in the name of a Web site, the computer essentially looks at a phone book of IP addresses,” explains Craig Labovitz, chief scientist at Arbor Networks, a software and Internet company. “It needs a number to call to connect you.” Trouble is, IPv4 is running out of identifiers. In fact, the expanding Web is expected to outgrow IPv4’s 4.3 billion addresses within a couple of years. Anticipating this shortage, researchers began developing a new IP addressing system, known as IPv6, more than a decade ago. IPv6 is ready to roll, and the U.S. government and some big Internet companies, such as Google, have pledged to switch over by 2012. But not everyone is eager to follow. For one, the jump necessitates costly upgrades to hardware and software. Perhaps a bigger disincentive is the incompatibility of the two addressing systems, which means companies must support both versions throughout the transition to ensure that everyone will be able to access content. In the meantime, IPv4 addresses, which are typically free, may be bought and sold. For the average consumer, Labovitz says, that could translate to pricier Internet access.
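To make those numbers concrete, here is a minimal sketch using Python’s standard ipaddress module; the specific addresses are taken from the reserved documentation ranges and are examples only.

```python
# IPv4 addresses are 32-bit numbers and IPv6 addresses are 128-bit numbers,
# which is the entire reason the newer system relieves the address shortage.
import ipaddress

print(2 ** 32)    # 4294967296 possible IPv4 addresses (the "4.3 billion" above)
print(2 ** 128)   # roughly 3.4 x 10^38 possible IPv6 addresses

# The two families are also written differently, one source of the
# incompatibility mentioned above (addresses from the documentation ranges).
v4 = ipaddress.ip_address("192.0.2.10")
v6 = ipaddress.ip_address("2001:db8::10")
print(v4.version, int(v4))   # 4, plus the address viewed as a plain integer
print(v6.version, int(v6))   # 6, plus a vastly larger integer
```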

2. Put the next internet to the test
In one GENI experiment, Stanford University researcher Kok-Kiong Yap is researching a futuristic Web that seamlessly transitions between various cellular and WiFi networks, allowing smartphones to look for an alternative connection whenever the current one gets overwhelmed. That’s music to the ears of everyone toting an iPhone.

3. Move data into the cloud
As Nick Feamster says, the cloud is an increasingly popular place to store data. So much so, in fact, that technology research company Gartner predicts the estimated value of the cloud market, including all software, advertising, and business transactions, will exceed $150 billion by 2013. Why the boom? Convenience. At its simplest, cloud computing is like a giant, low-cost, low-maintenance storage locker. Centralized servers, provided by large Internet companies like Microsoft, Google, and Amazon, plus scores of smaller ones worldwide, let people access data and applications over the Internet instead of storing them on personal hard drives. This reduces costs for software licensing and hardware.

4. Settle who owns the internet
While much of the data that zips around the Internet is free, the routers and pipes that enable this magical transmission are not. The question of who should pay for rising infrastructure costs, among other expenses, is at the heart of the long-standing net neutrality debate. On the one side, Internet service providers argue that charging Web sites more for bandwidth-hogging data such as video will allow them to expand capacity and deliver data faster and more reliably. Opponents counter that such a tiered or “pay as you go” Internet would unfairly favor wealthier content providers, allowing the richest players to indirectly censor their cash-strapped competition. So which side has the legal edge? Last December the Federal Communications Commission approved a compromise plan that would allow ISPs to prioritize traffic for a fee, but the FCC promises to police anticompetitive practices, such as an ISP’s mistreating, say, Netflix, if it wants to promote its own instant-streaming service. The extent of the FCC’s authority remains unclear, however, and the ruling could be challenged as early as this month.

5. Understand what can happen when networks make decisions for us
In November Iranian president Mahmoud Ahmadinejad confirmed that the Stuxnet computer worm had sabotaged national centrifuges used to enrich nuclear fuel. Experts have determined that the malicious code hunts for electrical components operating at particular frequencies and hijacks them, potentially causing them to spin centrifuges at wildly fluctuating rates. Labovitz of Arbor Networks says, “Stuxnet showed how skilled hackers can militarize technology.”

6. Get ready for virtual surgery
Surgeon Jacques Marescaux performed the first trans-Atlantic operation in 2001 when he sat in an office in New York and delicately removed the gall bladder of a woman in Strasbourg, France. Whenever he moved his hands, a robot more than 4,000 miles away received signals via a broadband Internet connection and, within 15-hundredths of a second, perfectly mimicked his movements. Since then more than 30 other patients have undergone surgery over the Internet. “The surgeon obviously needs to be certain that the connection won’t be interrupted,” says surgeon Richard Satava of the University of Washington. “And you need a consistent time delay. You don’t want to see a robot continually change its response time to your hand motions.”

7. Bring on the message ferries
A message ferry is a mobile device or Internet node that could relay data in war zones, disaster sites, and other places lacking communications infrastructure.

8. Don’t share hardware with people whom you might not trust
Or who might not trust you. The tenuous nature of free speech on the Internet cropped up in December when Amazon Web Services booted WikiLeaks from its cloud servers. Amazon charged that the nonprofit violated its terms of service, although the U.S. government may have had more to do with the decision than Amazon admits. WikiLeaks, for its part, shot back on Twitter, “If Amazon are [sic] so uncomfortable with the First Amendment, they should get out of the business of selling books.”

Unfortunately for WikiLeaks, Amazon is not a government agency, so there is no First Amendment case against it, according to Internet scholar and lawyer Wendy Seltzer of Princeton University. You may be doing something perfectly legal on Amazon’s cloud, Seltzer explains, and Amazon could give you the boot because of government pressure, protests, or even too many service calls. “Service providers give end users very little recourse, if any,” she observes. That’s why people are starting to think about “distributed hosting,” in which no one company has total power, and thus no one company controls freedom of speech.

9. Make cloud computing secure
Nick Feamster’s strategy is to tag sensitive information with irrevocable digital labels. For example, an employee who wants only his boss to read a message could create a label designating it as secret. That label would remain with the message as it passed through routers and servers to reach the recipient, preventing a snooping coworker from accessing it. “The file could be altered, chopped in two, whatever, and the label would remain with the data,” Feamster says. The label would also prohibit the boss from relaying the message to someone else. Feamster expects to unveil a version of his labeling system, called Pedigree, later this year.
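The article describes Pedigree only at a high level, so the following is not that system; it is a toy sketch of the general idea that a label can be cryptographically bound to data, making it evident when someone strips or changes the label. The key handling, field names, and example payload are all invented for illustration.

```python
# Toy illustration (NOT the Pedigree system): bind a sensitivity label to a
# payload with a keyed hash so that later changes to either one are detectable.
import hashlib
import hmac

SECRET_KEY = b"key-held-by-the-enforcing-component"   # hypothetical shared secret

def attach_label(payload: bytes, label: str) -> dict:
    tag = hmac.new(SECRET_KEY, label.encode() + b"|" + payload, hashlib.sha256).hexdigest()
    return {"label": label, "payload": payload, "mac": tag}

def label_intact(message: dict) -> bool:
    expected = hmac.new(SECRET_KEY,
                        message["label"].encode() + b"|" + message["payload"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

msg = attach_label(b"quarterly numbers, for the boss only", "secret")
print(label_intact(msg))   # True: label and payload match the tag

msg["label"] = "public"    # someone tries to relabel the data in transit
print(label_intact(msg))   # False: the relabeled message no longer verifies
```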

10. Manage your junk mail
A lot of it. Spam accounts for about 85 percent of all e-mail. That’s more than 50 billion junk messages a day, according to the online security company Symantec.

11. Privacy is dead? Don’t believe it
As we cope with the cruel fact that the Internet never forgets, researchers are looking toward self-destructing data as a possible solution. Vanish, a program created at the University of Washington, encodes data with cryptographic tags that degrade over time like vanishing ink. A similar program, aptly called TigerText, allows users to program text messages with a “destroy by” date that activates once the message is opened. Another promising option, of course, is simply to exercise good judgment.
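As a rough illustration of the “destroy by” idea, here is a toy store that refuses to return a message once its expiry time has passed and deletes it. This is not how Vanish or TigerText work (they rely on cryptographic keys becoming unrecoverable rather than on a cooperative server); it is only meant to make the concept concrete.

```python
# Toy "destroy by" store: a message is readable only until its expiry time,
# after which it is deleted. Real systems destroy the decryption key instead,
# which works even when the storage cannot be trusted to cooperate.
import time

class ExpiringStore:
    def __init__(self):
        self._items = {}

    def put(self, name, value, ttl_seconds):
        self._items[name] = (value, time.time() + ttl_seconds)

    def get(self, name):
        value, expires_at = self._items.get(name, (None, 0.0))
        if time.time() >= expires_at:
            self._items.pop(name, None)   # past its destroy-by date: gone for good
            return None
        return value

store = ExpiringStore()
store.put("note", "meet at noon", ttl_seconds=2)
print(store.get("note"))   # "meet at noon"
time.sleep(3)
print(store.get("note"))   # None
```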

12. Network to make a better world
Crowdsourcing science projects that harness the power of the wired masses have tremendous potential to quickly solve problems that would otherwise take years to resolve. Notable among these projects is Foldit (fold.it), an engaging online puzzle created by Seth Cooper of the University of Washington and others that tasks gamers with figuring out the shapes of hundreds of proteins, which in turn can lead to new medicines. Another is the UC Berkeley Space Sciences Lab’s Stardust@home project (stardustathome.ssl.berkeley.edu), which has recruited about 30,000 volunteers to scour, via the Internet, microscope images of interstellar dust particles collected from the tail of a comet that may hold clues to how the solar system formed. And Cornell University’s NestWatch (nestwatch.org) educates people about bird breeding and encourages them to submit nest records to an online database. To date, the program has collected nearly 400,000 nest records on more than 500 bird species.

Check out discovermagazine.com/web/citizenscience for more projects.

—
Andrew Grant and Andrew Moseman

The Five Worst Countries for Surfing the Web

China

Government control of the Internet makes using the Web in China particularly limiting and sometimes dangerous. Chinese officials, for instance, imprisoned human rights activist Liu Xiaobo in 2009 for posting his views on the Internet and then blocked news Web sites that covered the Nobel Peace Prize ceremony honoring him last December. Want to experience China’s censorship firsthand? Go to baidu.com, the country’s most popular search engine, and type in “Tiananmen Square massacre.”

North Korea
It’s hard to surf the Web when there is no Web to surf. Very few North Koreans have access to the Internet; in fact, due to the country’s isolation and censorship, many of its citizens do not even know it exists.

Burma
Burma is the worst country in which to be a blogger, according to a 2009 report by the Committee to Protect Journalists. Blogger Maung Thura, popularly known in the country as Zarganar, was sentenced to 35 years in prison for posting content critical of the government’s aid efforts after a hurricane.

Iran

The Iranian government employs an extensive Web site filtering system, according to the press freedom group Reporters Without Borders, and limits Internet connection speeds to curb the sharing of photos and videos. Following the controversial 2009 reelection of president Mahmoud Ahmadinejad, protesters flocked to Twitter to voice their displeasure after the government blocked various news and social media Web sites.

Cuba

Only 14 percent of Cubans have access to the Internet, and the vast majority are limited to a government-controlled network made up of e-mail, an encyclopedia, government Web sites, and selected foreign sites supportive of the Cuban dictatorship. Last year Cuban officials accused the United States of encouraging subversion by allowing companies to offer Internet communication services there.

—
Andrew Grant

Wed, 06 Jul 2011 05:13:00 -0500 en text/html https://www.discovermagazine.com/technology/weaving-a-new-web
Killexams : AV-Comparatives Releases Long-Term Test of 18 Leading Endpoint Enterprise & Business Security Solutions / July 2022

How well is your company protected against cybercrime?

Independent, ISO-certified security testing lab AV-Comparatives has published the July 2022 Enterprise Security Test Report, in which 18 IT security solutions were put to the test.

"As businesses face increased levels of cyber threats, effective endpoint protection is more important than ever. A data breach can lead to bankruptcy!" — Peter Stelzhammer, co-founder, AV-Comparatives

INNSBRUCK, Austria, July 27, 2022 /CNW/ -- The business and enterprise test report contains the test results for March-June of 2022, including the Real-World Protection, Malware Protection, Performance (Speed Impact) and False-Positives Tests. Full details of test methodologies and results are provided in the report.


https://www.av-comparatives.org/tests/business-security-test-2022-march-june/

The threat landscape continues to evolve rapidly, presenting antivirus vendors with new challenges. The test report shows how security products have adapted to these and improved protection over the years.

To be certified in July 2022 as an 'Approved Business Product' by AV-Comparatives, the tested products must score at least 90% in the Malware Protection Test, with zero false alarms on common business software, a rate below 'Remarkably High' for false positives on non-business files and must score at least 90% in the overall Real-World Protection Test over the course of four months, with less than one hundred false alarms on clean software/websites.
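Restated as a checklist, that certification bar looks roughly like the sketch below. The function name and the example scores are hypothetical; only the thresholds come from the text above.

```python
# A small sketch restating the certification criteria above as a checklist.
# The function name and the example scores are hypothetical; the thresholds
# are the ones stated in the release.
def approved_business_product(malware_protection_pct: float,
                              business_false_alarms: int,
                              nonbusiness_fp_remarkably_high: bool,
                              real_world_protection_pct: float,
                              real_world_false_alarms: int) -> bool:
    return (malware_protection_pct >= 90.0
            and business_false_alarms == 0
            and not nonbusiness_fp_remarkably_high
            and real_world_protection_pct >= 90.0
            and real_world_false_alarms < 100)

# Hypothetical example values, not taken from the report:
print(approved_business_product(97.5, 0, False, 99.2, 12))   # True
print(approved_business_product(88.0, 0, False, 99.2, 12))   # False: malware score below 90%
```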

Endpoint security solutions for enterprise and SMB from 18 leading vendors were put through the Business Main-Test Series 2022H1: Acronis, Avast, Bitdefender, Cisco, CrowdStrike, Cybereason, Elastic, ESET, G Data, K7, Kaspersky, Malwarebytes, Microsoft, Sophos, Trellix, VIPRE, VMware and WatchGuard.

Real-World Protection Test:
The Real-World Protection Test is a long-term test run over a period of four months. It tests how well the endpoint protection software can protect the system against Internet-borne threats.

Malware Protection Test:
The Malware Protection Test requires the tested products to detect malicious programs that could be encountered on the company systems, e.g. on the local area network or external drives.

Performance Test:
The Performance Test checks that the tested products do not provide protection at the expense of slowing down the system.

False Positives Test:
For each of the protection tests, a False Positives Test is run. These ensure that the endpoint protection software does not cause significant numbers of false alarms, which can be particularly disruptive in business networks.

Ease of Use Review:
The report also includes a detailed user-interface review of each product, providing an insight into what it is like to use in typical day-to-day management scenarios.

Overall, AV-Comparatives' July Business Security Test 2022 report provides IT managers and CISOs with a detailed picture of the strengths and weaknesses of the tested products, allowing them to make informed decisions on which ones might be appropriate for their specific needs.

The next awards will be given to qualifying products in the December 2022H2 report (covering the August-November tests). Like all AV-Comparatives' public test reports, the Enterprise & Business Endpoint Security Report is available universally and free of charge.

https://www.av-comparatives.org/tests/business-security-test-2022-march-june/

More Tests:
https://www.av-comparatives.org/news/anti-phishing-certification-test-2022/

About AV-Comparatives

AV-Comparatives is an independent organisation offering systematic testing to examine the efficacy of security software products and mobile security solutions. Using one of the largest sample collection systems worldwide, it has created a real-world environment for truly accurate testing. AV-Comparatives offers freely accessible test results to individuals, news organisations and scientific institutions. Certification by AV-Comparatives provides a globally recognised official seal of approval for software performance.

Newsroom: http://www.einpresswire.com/newsroom/av-comparatives/

Contact: Peter Stelzhammer
e-mail: [email protected]
phone: +43 720115542 


AV-Comparatives Test Results – Enterprise Security (PRNewsfoto/AV-Comparatives)

View original content and multimedia: https://www.prnewswire.com/news-releases/av-comparatives-releases-long-term-test-of-18-leading-endpoint-enterprise--business-security-solutions--july-2022-301594367.html

SOURCE AV-Comparatives


Wed, 27 Jul 2022 03:07:00 -0500 text/html https://www.tmcnet.com/usubmit/-av-comparatives-releases-long-term-test-18-leading-/2022/07/27/9646038.htm
Killexams : Swan: Better Linux On Windows

If you are a Linux user that has to use Windows — or even a Windows user that needs some Linux support — Cygwin has long been a great tool for getting things done. It provides a nearly complete Linux toolset. It also provides almost the entire Linux API, so that anything it doesn’t supply can probably be built from source. You can even write code on Windows, compile and test it and (usually) port it over to Linux painlessly.

However, Cygwin’s package management is a little clunky and setting up the GUI environment has always been tricky, especially for new users. A project called Swan aims to make a full-featured X11 Linux environment easy to install on Windows.

The project uses Cygwin along with Xfce for its desktop. Cygwin provides pretty good Windows integration, but Swan also includes extra features. For example, you can make your default browser the Windows browser with a single click. It also includes spm — a package manager for Cygwin that is somewhat easier to use, although it still launches the default package manager to do the work (this isn’t a new idea, by the way).

Here’s a screenshot of Windows 10 (you can see Word running native in the background) with top running in a Bash shell and Thunar (the default file manager for Swan). Notice the panel at the top with the swan icon. You can add things there and there are numerous settings you can access from the swan icon.

Swan is fairly new, so it still has some rough edges, but we like where it is going. The install process is in two parts, which doesn’t make sense for something trying to be easier. Admittedly, it is already easier than doing an X11 install with normal Cygwin. However, on at least one test install, the virus scanner erroneously tripped on the wget executable, and that caused the install to fail.

The project is hosted on GitHub if you want to examine the source or contribute. Of course, Windows has its own support for Linux now (sort of). Swan isn’t quite a finished product and, like Cygwin, it isn’t a total replacement for Linux. But it is still worth a look on any machine that you use that boots Windows.

Wed, 03 Aug 2022 11:59:00 -0500 Al Williams en-US text/html https://hackaday.com/2017/03/29/swan-better-linux-on-windows/