Completely free 250-370 exam braindumps are provided by

If studying just the 250-370 course books and eBooks will not get you a pass, visit and download the 250-370 free PDF. You can download a 100% free practice test to evaluate before you purchase the full version. This will prove to be your best decision toward success. Just memorize the 250-370 practice test, practice with the VCE exam simulator, and the work is done.

Exam Code: 250-370 Practice test 2022 by team
Administration of Symantec NetBackup 7.0 for Windows
Symantec Administration test

Norton Power Eraser

Eliminates deeply embedded and difficult-to-remove crimeware that traditional virus scanning doesn't always detect.

Norton Power Eraser is a powerful free removal tool that may help you clean up certain types of difficult-to-remove security risks. If a program has hijacked your computer and you are having difficulty detecting or removing it, Norton Power Eraser may be able to clean your computer. Norton Power Eraser includes detection and removal capabilities for security risks that impersonate legitimate applications (for example, fake antivirus software), often known as "scareware", "rogueware" or "scamware". You can run this tool to scan for threats even if you have a Symantec product or any other security product. If you cannot start the computer in Normal mode, you can run this tool in Safe mode.

Norton Power Eraser is easy to download, and scans your computer quickly to detect the most aggressive computer viruses. You don't need to install this tool.

Because Norton Power Eraser uses aggressive methods to detect these threats, there is a risk that it can select some legitimate programs for removal. You should use this tool very carefully, and only after you have exhausted other options.

Download: Norton Power Eraser | 9.1 MB (Freeware)
Links: Norton Power Eraser Homepage | Tutorial


Fri, 01 Jul 2022 05:00:00 -0500 Razvan Serea
AV-Comparatives Releases Long-Term Test of 18 Leading Endpoint Enterprise & Business Security Solutions / July 2022

How well is your company protected against cybercrime?

Independent, ISO-certified security testing lab AV-Comparatives has published its July 2022 Enterprise Security Test Report, putting 18 IT security solutions to the test.

"As businesses face increased levels of cyber threats, effective endpoint protection is more important than ever. A data breach can lead to bankruptcy!" — Peter Stelzhammer, co-founder, AV-Comparatives

INNSBRUCK, Austria, July 27, 2022 /PRNewswire/ -- The business and enterprise test report contains the test results for March-June of 2022, including the Real-World Protection, Malware Protection, Performance (Speed Impact) and False-Positives Tests. Full details of test methodologies and results are provided in the report.


The threat landscape continues to evolve rapidly, presenting antivirus vendors with new challenges. The test report shows how security products have adapted to these and improved protection over the years.

To be certified in July 2022 as an 'Approved Business Product' by AV-Comparatives, a tested product must score at least 90% in the Malware Protection Test with zero false alarms on common business software and a false-positive rate below 'Remarkably High' on non-business files, and must score at least 90% in the overall Real-World Protection Test over the course of four months, with fewer than one hundred false alarms on clean software/websites.
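As a rough illustration, the certification thresholds above can be expressed as a single check. This is a sketch only; the function and parameter names are invented here and are not AV-Comparatives' terminology:

```python
# Sketch of the stated July 2022 'Approved Business Product' criteria.
# All names below are invented for illustration.
def qualifies_as_approved_business_product(
    malware_protection_pct: float,
    business_fp_count: int,
    nonbusiness_fp_rate_remarkably_high: bool,
    real_world_protection_pct: float,
    clean_software_fp_count: int,
) -> bool:
    """Return True only if every stated threshold is met."""
    return (
        malware_protection_pct >= 90.0               # Malware Protection Test
        and business_fp_count == 0                   # zero FPs on common business software
        and not nonbusiness_fp_rate_remarkably_high  # FP rate below 'Remarkably High'
        and real_world_protection_pct >= 90.0        # overall Real-World Protection Test
        and clean_software_fp_count < 100            # fewer than 100 FPs on clean software/sites
    )

print(qualifies_as_approved_business_product(95.0, 0, False, 92.5, 12))  # → True
```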

Endpoint security solutions for enterprise and SMB from 18 leading vendors were put through the Business Main-Test Series 2022H1: Acronis, Avast, Bitdefender, Cisco, CrowdStrike, Cybereason, Elastic, ESET, G Data, K7, Kaspersky, Malwarebytes, Microsoft, Sophos, Trellix, VIPRE, VMware and WatchGuard.

Real-World Protection Test: The Real-World Protection Test is a long-term test run over a period of four months. It tests how well the endpoint protection software can protect the system against Internet-borne threats.

Malware Protection Test:
The Malware Protection Test requires the tested products to detect malicious programs that could be encountered on the company systems, e.g. on the local area network or external drives.

Performance Test:
The Performance Test checks that the tested products do not achieve their protection at the expense of slowing down the system.

False Positives Test:
For each of the protection tests, a False Positives Test is run. These ensure that the endpoint protection software does not cause significant numbers of false alarms, which can be particularly disruptive in business networks.

Ease of Use Review:
The report also includes a detailed user-interface review of each product, providing an insight into what it is like to use in typical day-to-day management scenarios.

Overall, AV-Comparatives' July Business Security Test 2022 report provides IT managers and CISOs with a detailed picture of the strengths and weaknesses of the tested products, allowing them to make informed decisions on which ones might be appropriate for their specific needs.

The next awards will be given to qualifying products in December 2022 (2022H2, covering the August-November tests). Like all AV-Comparatives' public test reports, the Enterprise & Business Endpoint Security Report is available universally and for free.


About AV-Comparatives 

AV-Comparatives is an independent organisation offering systematic testing to examine the efficacy of security software products and mobile security solutions. Using one of the largest sample collection systems worldwide, it has created a real-world environment for truly accurate testing. AV-Comparatives offers freely accessible test results to individuals, news organisations and scientific institutions. Certification by AV-Comparatives provides a globally recognised official seal of approval for software performance.


Contact: Peter Stelzhammer
phone: +43 720115542 


AV-Comparatives Test Results – Enterprise Security (PRNewsfoto/AV-Comparatives)



SOURCE AV-Comparatives

Wed, 27 Jul 2022 01:31:00 -0500
RAM Scraping Attack

What Is a RAM Scraping Attack?

A RAM scraping attack is an intrusion into the random access memory (RAM) of a retail sales terminal in order to steal consumer credit card information. This type of cybercrime has plagued retailers and their customers since at least 2008.

RAM scraping is also called a point-of-sale (POS) attack because the target is a terminal used to process retail transactions.

Understanding a RAM Scraping Attack

The first known RAM scraping attack was reported in an alert issued by the credit card company Visa Inc. in October 2008. The company's security team discovered that point-of-sale (POS) terminals used to process customer transactions using its cards had been accessed by hackers. The hackers had been able to obtain unencrypted customer information from the RAM in the terminals.

Key Takeaways

  • A RAM scraping attack targets credit card transaction information stored temporarily in the point-of-sale terminal.
  • It is only one type of malware used to steal consumer information.
  • The notorious Home Depot and Target attacks used RAM scraping malware.
  • RAM scraping is thwarted by newer credit cards that use an embedded chip rather than a magnetic stripe.

The targets of the earliest attacks were mostly in the hospitality and retail industries, which process high volumes of credit card transactions at a large number of locations. By 2011, investigators were tracking an uptick in the introduction of malware bugs.

Notorious POS Attacks

POS attacks did not gain widespread attention until 2013 and 2014, when hackers infiltrated the networks of the Target and Home Depot retail chains. The personal information of more than 40 million Target customers and 56 million Home Depot customers was stolen in those attacks, which were attributed to the use of a new spyware program known as BlackPOS.

The attacks continue, although RAM scrapers are now being replaced with more advanced types of malware such as screen grabbers and keystroke loggers. These are exactly what they sound like. They are malware programs designed to capture personal information when it is displayed or as it is entered and then transmit it to a third party.

How RAM Scrapers Work

The plastic credit cards that we all carry contain two distinct sets of information.

  • The first set is embedded in the magnetic stripe and is invisible to the human eye. That stripe contains two tracks of information. The first track contains an alphanumeric sequence based on a standard developed by the International Air Transport Association (IATA). This sequence contains the account number, cardholder’s name, expiration date, and more in a sequence recognizable by any POS machine. The second track uses a shorter but analogous sequence developed by the American Bankers Association (ABA). There is a third track but it is little used.
  • The second set is visible. It's the three- or four-digit code known as the card verification number (CVN) or card security code (CSC). This number adds an extra layer of security because it is not included in the electronic data contained in the magnetic stripe.
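Because the Track 1 layout described above follows a fixed field sequence (per ISO/IEC 7813), a short sketch of how software reads such a record can make the structure concrete. The sample data below is fabricated, and a real scraper would of course read this from terminal memory, not a string:

```python
import re

# Minimal sketch of parsing a Track 1 (IATA-format) magnetic-stripe record.
# Layout: start sentinel '%', format code 'B', PAN, '^', name, '^',
# expiry (YYMM), 3-digit service code, discretionary data, end sentinel '?'.
TRACK1_RE = re.compile(
    r"^%B(?P<pan>\d{1,19})\^(?P<name>[^^]{2,26})\^"
    r"(?P<exp_yy>\d{2})(?P<exp_mm>\d{2})(?P<service>\d{3})(?P<discretionary>[^?]*)\?$"
)

def parse_track1(raw: str) -> dict:
    m = TRACK1_RE.match(raw)
    if not m:
        raise ValueError("not a valid Track 1 record")
    d = m.groupdict()
    d["expires"] = f"20{d.pop('exp_yy')}-{d.pop('exp_mm')}"
    return d

# Fabricated sample data (a well-known test PAN, not a real card).
sample = "%B4111111111111111^DOE/JOHN^25121010000000000000?"
rec = parse_track1(sample)
print(rec["pan"], rec["name"], rec["expires"])  # → 4111111111111111 DOE/JOHN 2025-12
```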

Screen grabbers and keystroke loggers are newer ways to steal credit card data.

The POS terminal collects all of the data in that first set, and sometimes the second code as well. The data is then held in the memory of that POS machine until it is periodically purged.

When Data Is Vulnerable

As long as it is in temporary storage on the terminal, that information is vulnerable to RAM scrapers.

Small merchants are a relatively easy target for cybercriminals since they can't devote a lot of resources to elaborate security systems. Larger retailers like Target and Home Depot are far more attractive because of the massive amounts of data they retain at any given time.

Avoiding RAM Scraping

Thwarting RAM scraping is mostly the job of the retailer, not the consumer. Luckily, a good deal of progress has been made since the infamous attacks on Home Depot and Target.

Your credit card issuers have by now almost certainly sent you a new card that is inserted into a retailer's card reader rather than swiped along the side of it. The reader uses the chip embedded in the card rather than the older magnetic stripe. The purpose of this technology is to make a POS attack more difficult.

Contactless payment by credit card is considered as safe as "dipping" a card. Contactless payments are not yet universally accepted by retailers (or enabled by card issuers) but are increasingly an option.

It took a long while for this switch to be fully put in place nationwide because it required every retailer who used the new system to buy new equipment in order to enable it. If you run across a retailer who still uses the old swipe readers, you might consider paying cash instead.

Tue, 19 Jul 2022 01:49:00 -0500
NetBackup™ Commands Reference Guide
-backupid value

The ID of the backup image to use to create the parameters file to restore a VMware virtual machine disk or disks, in clientname_backuptime format. The backuptime is the decimal number of seconds since January 1, 1970.

Use this option with the -restorespecout option. Do not combine it with the -s or -e option.
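As a minimal sketch of the ID format described above (the client name and timestamp are invented for illustration):

```python
from datetime import datetime, timezone

# Decode the backuptime portion of a NetBackup backup ID
# (clientname_backuptime, where backuptime is seconds since Jan 1, 1970).
# rpartition tolerates client names that themselves contain underscores.
def parse_backup_id(backup_id: str) -> tuple:
    client, _, epoch = backup_id.rpartition("_")
    return client, datetime.fromtimestamp(int(epoch), tz=timezone.utc)

# Invented example values:
client, when = parse_backup_id("vmclient01_1656633600")
print(client, when.isoformat())  # → vmclient01 2022-07-01T00:00:00+00:00
```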


Contacts the BMR server to carry out tasks related to virtual machine creation from client backup.

-C vm_client

The name of the virtual machine as identified in the backup. For example, if the policy backed up the virtual machine by its host name, specify that host name.

To restore to a different location, use the -vmserver and -R options.

-config bmr_config_name

Specifies the BMR configuration name. The default name is current. Applies only to the BMR VM conversion.

-copy copy_number

Specifies the copy number to restore from for a vSphere Restore operation. This option allows a restore from a copy other than the primary copy. For example, -copy 3 restores copy 3 of the backup image.

This option is only supported for a full backup of a VMware virtual machine. If the specified copy number does not exist, the primary copy is used.

-disk_media_server media_server

Specifies which media server performs the Instant Recovery.

This option is useful if NetBackup storage is configured over several media servers, such as for load balancing. Without the -disk_media_server option, the Instant Recovery job may select any of the available media servers to do the restore. If only one of the media servers is configured for Instant Recovery, specify that server with the -disk_media_server option.


Suppresses the confirmation prompts.


Starts the Instant Recovery of the specified virtual machine. For VMware, the command mounts the backup image as an NFS datastore. The virtual machine is instantly recovered when the virtual machine data is accessible on the VM host.

-ir_deactivate ir_identifier [-force]

Deletes the specified restored virtual machine from the ESX host and releases the NetBackup media server resources. The -force option suppresses the confirmation prompts.

-ir_done ir_identifier

Completes the virtual machine instant recovery job after the data is migrated. It removes the NetBackup storage and releases the media server resources. The NetBackup storage is the datastore that is mounted on the ESX host.


Lists details about the virtual machines that are activated by instant recovery.

-ir_reactivate ir_identifier [-force]

Reactivates a restored virtual machine by remounting the NetBackup NFS datastore. It also registers the restored virtual machines on the ESX host from the temporary datastore on the ESX host.

ir_identifier is the virtual machine's numeric identifier from the -ir_listvm output.

The -force option suppresses the confirmation prompts.


Restarts an interrupted instant recovery job for all virtual machines on the ESX host and NetBackup media server combination.

-L progress_log

Specifies the name of an existing file in which to write progress information. This option applies to vSphere restore and Hyper-V restore.

Only default paths are allowed for this option, and Veritas recommends using the default paths. If you cannot use the NetBackup default path in your setup, add custom paths to the NetBackup configuration. The following are the default paths:

UNIX systems: /usr/openv/netbackup/logs/user_ops/proglog

Windows systems: install_path\NetBackup\logs\user_ops\proglog

For more information on how to add a custom path, see the "BPCD_ALLOWED_PATH option for NetBackup servers and clients" topic in the NetBackup Administrator's Guide, Volume I.

-media_server media_server_activate_vm

Specifies the media server on which the NFS datastores that contain the backup images were mounted when you reactivate virtual machines. This option is used only with the -ir_reactivate_all function.


Overwrites the VM and its associated resources if a VM with the same name already exists. The resources are entities such as virtual machine disk format (VMDK) files that explicitly belong to the existing VM. If -O is specified, the VMware server is requested to remove the VM before the VM is restored. If it is not specified, the restore may fail. This option is used with the VClient restore, the Hyper-V restore, and the BMR VM conversion.

-R rename_file

Specifies an absolute directory path to a rename file, which is used to restore a VMware virtual machine. The rename file indicates that the restore is to be redirected to an alternate location and specifies details about the alternate client location. For VMware, the rename file can include any of the following entries:

change /first_vmdk_path to /new_first_vmdk_path 
change /second_vmdk_path to /new_second_vmdk_path 
change /n'th_vmdk_path to /new_nth_vmdk_path 
change vmname to NEW_VM_NAME 
change esxhost to NEW_ESX_HOST 
change datacenter to NEW_DATACENTER 
change folder to NEW_FOLDER 
change resourcepool to NEW_RESOURCEPOOL 
change datastore to NEW_DATASTORE 
change network to NEW_NETWORK
change organization to NEW_ORGANIZATION
change orgvdc to NEW_ORGVDC
change vcdserver to NEW_VCDSERVER
change vcdvapp to NEW_VCDVAPP
change vcdvapptemplate to NEW_VCDVAPPTEMPLATE
change vcdvmname to NEW_VCDVMNAME
change vcdcatalog to NEW_VCDCATALOG

Instant Recovery uses the following subset of this list:

change vmname to NEW_VM_NAME
change esxhost to NEW_ESX_HOST
change resourcepool to NEW_RESOURCEPOOL
change network to NEW_NETWORK

The following are notes regarding these entries:

  • Enter the change line exactly as it appears in this list, except for the variable at the end (shown in all caps).

  • Each change line must end with a carriage return. If the rename_file contains only one entry, make sure that the end of the line contains a carriage return.

  • If the rename file has no contents, the restore uses default values from the backup image.

  • Use change datastore to NEW_DATASTORE to identify the target datastore when you restore from backups that are not made with Replication Director.

  • The rename file must be in UTF-8 character encoding.

With NetBackup 7.7.2 and later, only default paths are allowed for this option, and Veritas recommends using the default paths. If you cannot use the NetBackup default path in your setup, add custom paths to the NetBackup configuration.

For more information on how to add a custom path, see the "BPCD_ALLOWED_PATH option for NetBackup servers and clients" topic in the NetBackup Administrator's Guide, Volume I.
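A minimal sketch of generating a rename file per the rules above. The target values and file name are invented; only the change keywords come from the documented list:

```python
# Build a rename file for redirecting a VMware restore.
# Entry keywords come from the documented list; values are examples only.
entries = [
    "change vmname to RESTORED_VM01",
    "change esxhost to esx02.example.com",
    "change datastore to DS_RESTORE",
]

# Per the rules above, every change line (including the last one) must end
# with a line terminator, and the file must be UTF-8 encoded.
with open("rename_file.txt", "w", encoding="utf-8", newline="") as f:
    for line in entries:
        f.write(line + "\n")
```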

-restorespec filename

Creates a new virtual machine and restores the NetBackup client and the disks that you specify in the filename to the new VM. A special case called In-place Disk Restore replaces all disks of an existing VM with the data in its backup. RDM and independent disks are not replaced or deleted. For In-place Disk Restore, the disks are restored to the same disk controller configuration that was captured at the time of the backup. The filename is a text file that uses the JavaScript Object Notation (JSON) format.

The text file must be in UTF-8 character encoding.

You can use the -restorespecout option to create the JSON-formatted text file. You can edit the text file so that it contains only the virtual machine disks that you want to restore.

The following is an example of the restore parameters that the -restorespec option requires:

  "ClientType": "VMware",
  "ClientName": "VM-client-name",
  "RestoreType": "SelectiveDiskRestore",
  "BackupImageSelection": {
    "MasterServer": "Master-server-name",
    "StartDate": "mm/dd/yy hh:mm:ss",
    "EndDate": "mm/dd/yy hh:mm:ss",
    "BackupId": "clientname_timestamp"
  "VMwareRestoreParameters": {
    "vCenterServer": "vCenter-name-for-restore",
    "VMwareRecoveryHost": "Recovery-host-name",
    "DefaultDiskProvisioning": "thin",
    "TransportMode": "san:hotadd:nbd",
    "VMwareVirtualMachineDestination": {
      "VMName": "Restore-vm-name",
      "AttachDisksToExistingVM": "No",
      "PowerOn": "No",
      "Datacenter": "Path-of-Datacenter-for-destination-vm",
      "ESX": "Hostname-of-the-ESX-host",
      "Folder": "Path-to-destination-VM-folder",
      "ResourcePool/Vapp": "Path-of-vApp-or-resource-pool-destination",
      "VmxDatastore": ""
    "VMwareVirtualDiskDestination": [
         "VirtualDisk" : "/DS1/BackedupVM/BackedupVM.vmdk",
         "OverwriteExistingDisk": "No",
         "Datastore": "[Datastore-name]",
         "Path": "",
         "Provisioning": "thin"
         "Controller": "scsi0-0"           },
         "VirtualDisk": "/DS2/BackedupVM/BackedupVM_1.vmdk",
         "OverwriteExistingDisk": "No",
         "Datastore": "",
         "Path": "[datastore_name] MyVm/MyVM_1.vmdk",
         "Provisioning": "eagerzeroed"
         "Controller": "scsi0-1"           }
    "VMwareAdvancedRestoreOptions": {
      "DeleteRestoredVMOnError": "No",
      "VMShutdownWaitSeconds": 900

The following is an example of the restore parameters that the -restorespec option requires for In-place Disk Restore:

  "BackupImageSelection": {
    "StartDate": "05/03/20 21:50:34",
    "BackupId": "",
    "EndDate": "05/03/20 21:50:34",
    "MasterServer": "bptms-lnr73-0029"
  "ClientName": "",
  "VMwareRestoreParameters": {
    "vmdk_compression": "none",
    "VMwareAdvancedRestoreOptions": {
      "VMShutdownWaitSeconds": 900,
      "DeleteRestoredVMOnError": "No"
    "VMwareRecoveryHost": "bptms-lnr73-0029",
    "VMwareVirtualMachineDestination": {
      "ResourcePool/Vapp": "/New Datacenter/host/Test01/Resources",
      "VmxDatastore": "datastore1",
      "Datacenter": "/New Datacenter",
      "AttachDisksToExistingVM": "DeleteAllDisksAndReplace",
      "ESX": "",
      "VMName": "bptesx60l-19vm1",
      "Folder": "/New Datacenter/vm/",
      "PowerOn": "Yes"
    "DefaultDiskProvisioning": "unknown",
    "TransportMode": "nbdssl",
    "VMwareVirtualDiskDestination": [],
    "vCenterServer": "bptesx60l-19vc"
  "ClientType": "VMware",
  "RestoreType": "SelectiveDiskRestore"

The following itemized lists describe the six sections of the filename. The optional sections or optional fields that you do not want to use must be omitted from the filename.

First section (required): The opening section of the filename provides the required information about the client that contains the disks that you want to restore.

  • ClientType. The client type as configured in the backup policy. Required.

    For VMware virtual machine disk restore, use VMware.

  • ClientName. The client name as configured in the backup policy. Required.

  • RestoreType. The type of restore. Required.

    For VMware virtual machine disk restore, use SelectiveDiskRestore.

Second section (optional): The BackupImageSelection section of the filename specifies the information that is required to identify the backup image to restore. If this section is not specified, NetBackup restores from the most recent backup. The following are the fields that describe the BackupImageSelection:

  • MasterServer. The fully-qualified domain name of the NetBackup master server to use to query the VM details. Optional.

    If not specified, the master server that is specified in the NetBackup configuration is used.

  • StartDate. The start date to look for backup images, in mm/dd/yy hh:mm:ss format. If more than one backup image exists in the date range, NetBackup selects the most recent backup. Optional.

    If not specified, the start date is 6 months earlier than the current date.

  • EndDate. The end date to look for backup images, in mm/dd/yy hh:mm:ss format. If more than one backup image exists in the date range, NetBackup selects the most recent backup. Optional.

    If not specified, NetBackup uses the current date.

  • BackupId. The ID of the backup image to use for the restore, in clientname_backuptime format. The backuptime is the decimal number of seconds since January 1, 1970. Optional.

    If not specified, NetBackup uses the most recent backup image. If you specify a StartDate, EndDate, and a valid BackupId, NetBackup restores from the BackupId image.

Third section (required): The VMwareRestoreParameters section of the filename specifies the VMware attributes of the virtual disk to be restored. All of the fields in this section are optional; however, the section is required because it also contains two required subsections. The following are the fields that describe the VMwareRestoreParameters:

  • vCenterServer. The host name of the destination vCenter for the restore, in the same format as specified in the credentials. Optional.

    To restore to a standalone ESXi hypervisor when the backup was through a vCenter, the value of this field must be None.

  • VMwareRecoveryHost. The host that performs the restore. Optional.

    If not specified, NetBackup uses the backup host value from the backup image.

  • DefaultDiskProvisioning. The default disk provisioning for all of the disks to be created in the restore VM: thin, thick, eagerzeroed, or unknown. Optional.

    For each disk, you can override this default by specifying a different Provisioning value in the VMwareVirtualDiskDestination section of the filename.

    If neither DefaultDiskProvisioning nor Provisioning is specified, NetBackup uses the provisioning that is specified in the backup.

  • TransportMode. The transport mode combination to use for the restore as specified in lowercase, colon separated values: hotadd:nbd:nbdssl:san. The order of the specification is significant; NetBackup attempts each method in turn until the restore succeeds. If all methods fail, the restore fails. Optional.

    If not specified, NetBackup uses the transport mode combination that was used for the backup.

Fourth section (required): The VMwareVirtualMachineDestination section of the filename specifies the destination parameters for the restore. This section is subordinate to the VMwareRestoreParameters section. It contains the following fields:

  • VMName. The unique display name of the new virtual machine for the restored disk or disks. The nbrestorevm command adds a timestamp to the name of the original VM client when it populates this field. The timestamp is the decimal number of seconds since January 1, 1970. Required.

    NetBackup restores the virtual machine disks to a new VM. Therefore, if this name conflicts with an existing display name, the restore fails.

  • AttachDisksToExistingVM. Determines whether to restore the selected VMDKs to an existing VM, restore them to a new VM, or replace all the VMDKs on an existing VM, as follows:

    • If the value is Yes, the VM specified in the VMName field must exist in the target vCenter or ESX server. If it does not exist, the restore fails with status code 2820.

    • If the value is No, the VM specified in the VMName field must not exist in the target vCenter or ESX server. If it exists, the restore fails with status code 2820.

    • If the value is DeleteAllDisksAndReplace, the VM specified in the VMName field must exist in the target vCenter or ESX server. If it does not exist, the restore fails with a NetBackup Status Code 2820.

    The default value is No.

  • PowerOn. Whether to turn on the target VM after the restore, as follows:

    • If the value is Yes, the target VM is powered ON at the end of a successful restore.

    • If the value is No, the target VM is not turned on after the restore.

    If the restore is to an existing VM, the VM is turned off before the virtual disks are attached to it during the restore.

    The default value is No.

  • Datacenter. The name of the VMware datacenter for the virtual disk, in pathname format. Optional.

    To restore to a standalone ESXi hypervisor when the backup was through a vCenter, the value of this field must be None.

    If not specified, NetBackup uses the value from the backup.

  • ESX. The name of the ESX host to which NetBackup should restore the virtual disk. Optional.

    If not specified, NetBackup uses the value from the backup.

  • Folder. The pathname of the VM folder to which NetBackup should restore the virtual disk. Optional.

    To restore to a standalone ESXi hypervisor when the backup was through a vCenter, the value of this field must be None.

    If not specified, NetBackup uses the value from the backup.

  • ResourcePool/Vapp. The pathname of the resource pool to which NetBackup should restore the virtual disk. If the restore is to a vApp, specify the path of the vApp. Optional.

    If not specified, NetBackup uses the value from the backup.

  • VmxDatastore. The name of the Datastore to which NetBackup should restore the .vmx configuration file and other VM configuration files. This Datastore is also used to create the configuration files for the temporary VM created during restore. You may enclose the name in square brackets but do not have to. Optional.

    If not specified, NetBackup uses the value from the backup.

  • DefaultDiskDatastore. The datastore name to which NetBackup should restore all the virtual disks for In-place Disk Restore. Optional. If not specified, NetBackup uses the value from the backup. This option is only valid for In-place Disk Restore; if it is specified for any other type of selective disk restore, it is ignored.

Fifth section (required, except when the VMwareVirtualMachineDestination AttachDisksToExistingVM parameter is DeleteAllDisksAndReplace; if this section is specified for In-place Disk Restore, the restore validation fails): The VMwareVirtualDiskDestination section of the filename is an array that specifies the disks to restore and the restore parameters for those disks. This section is subordinate to the VMwareRestoreParameters section. It can contain one or more sets of the following fields, one set per virtual machine disk. A comma must separate fields in a set, and a comma must separate sets.

  • VirtualDisk. The full pathname of the virtual disk to restore. This path must match exactly the path of the .vmdk file when it was backed up. Required.

  • OverwriteExistingDisk. Whether to overwrite the existing virtual disk or disks on the target VM, as follows:

    • If the value is Yes, overwrite the original virtual disk and retain the disk UUID.

    • If the value is No, restore the virtual disk to the target VM as a new disk. VMware assigns a new UUID to the disk.

    The default value is No.

  • Datastore. The name of the Datastore that is the destination for the restore. You may enclose the name in square brackets but do not have to. (VMware generates the Datastore pathname using the naming conventions for the VM.) Optional.

    For a restore of virtual disks to a datastore cluster, specify the name of the datastore cluster in this field.

    If not specified, NetBackup uses the value that is specified in the Path field. If neither Datastore nor Path is specified, NetBackup uses the Datastore from the backup image.

  • Path. The full pathname to the restore destination for the virtual disk, in the following format:

    [datastore_name] MyVM/MyVM.vmdk


    If you specify a Path and it is not available, or a disk already exists at that path, the restore fails. If neither Datastore nor Path is specified, NetBackup uses the Datastore from the backup image.

  • Provisioning. The disk provisioning for this specific disk: thin, thick, eagerzeroed, or unknown. Optional.

    If not specified, NetBackup uses the DefaultDiskProvisioning value.

  • Controller

    The virtual disk controller to which the disk is attached in the original VM. Optional.

    This field is informational only to help you determine which virtual disk or disks to restore. The value is not used during a restore.

Sixth section (optional). The VMwareAdvancedRestoreOptions section of the file specifies parameters to restore to an existing VM. This section is subordinate to the VMwareRestoreParameters section.

  • DeleteRestoredVMOnError. Whether to delete the temporary VM if the disk attach operation fails, as follows:

    • If the value is Yes, delete the temporary VM.

    • If the value is No, do not delete the temporary VM. If the disks are not successfully attached to the target VM, you can access the data on the temporary VM.

    The default value is No. Optional.

  • VMShutdownWaitSeconds. For restores to an existing VM, the restore process shuts down the target virtual machine before it attaches the disk or disks. The duration of the shutdown operation depends on the VMware workload. Use this parameter to specify how long the restore process should wait for shutdown before giving up on restore.

    The default value is 900 seconds (15 minutes). Optional.

-restorespecout filename

Specifies the pathname of the file in which nbrestorevm writes the parameters of the individual virtual machine disk or disks that you want to restore. By default, nbrestorevm creates the file in the current working directory. To specify the backup image from which to obtain the parameters, use the -backupid option or the -s and -e options. If you specify the -s and -e options, NetBackup uses the most recent backup in that date range.

Edit the file so that it contains the appropriate information. Ensure that the VMName field contains the name for the new VM. Ensure that the VMwareVirtualDiskDestination section of the file contains only the virtual machine disk or disks that you want to restore. Use the edited file as the input file for the -restorespec option, which restores the virtual machine disk or disks that are identified in the file.

By default, nbrestorevm creates the file in the current working directory. To create the file in a different directory, specify a pathname for the filename. That path must be in the NetBackup allowed list of paths.

For more information on how to add a custom path, see the "BPCD_ALLOWED_PATH option for NetBackup servers and clients" topic in the NetBackup Administrator's Guide, Volume I.
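The edit step above amounts to filtering the disk list in the spec file. A hedged Python sketch, assuming a JSON-style layout (the actual layout is whatever -restorespecout wrote; the field names below mirror the section names mentioned above but are otherwise an assumption):

```python
import json

# Illustrative spec only -- generate the real one with:
#   nbrestorevm -restorespecout myspec -backupid <id>
spec = {
    "VMName": "restored-vm",
    "VMwareVirtualDiskDestination": [
        {"VirtualDisk": "MyVM/MyVM.vmdk"},
        {"VirtualDisk": "MyVM/MyVM_1.vmdk"},
    ],
}

# Keep only the disks you actually want to restore, then feed the edited
# file back through -validate and -restorespec.
wanted = {"MyVM/MyVM_1.vmdk"}
spec["VMwareVirtualDiskDestination"] = [
    d for d in spec["VMwareVirtualDiskDestination"] if d["VirtualDisk"] in wanted
]
edited = json.dumps(spec, indent=2)
```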

-S master_server

Specifies a different master server to restore a virtual machine from a backup that was made by that master.

-s mm/dd/yyyy [hh:mm:ss] -e mm/dd/yyyy [hh:mm:ss]

Specifies the start date (-s) and end date (-e) of the search period. NetBackup limits the selectable backup images to those with timestamps that fall within the specified period, and it uses the latest valid backup image in that range to perform the restore. These options are used with all functions except the BMR VM conversion.
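The selection rule can be sketched in a few lines of Python (illustrative only; the image list is a hypothetical structure, not a real NetBackup API):

```python
from datetime import datetime

def latest_in_range(images, start, end):
    """Return the most recent image whose timestamp falls within [start, end],
    mirroring how -s and -e narrow backup-image selection."""
    candidates = [img for img in images if start <= img["time"] <= end]
    if not candidates:
        return None
    return max(candidates, key=lambda img: img["time"])

images = [
    {"id": "vm1_100", "time": datetime(2022, 6, 1)},
    {"id": "vm1_200", "time": datetime(2022, 6, 15)},
    {"id": "vm1_300", "time": datetime(2022, 7, 1)},
]
# Only the June images qualify; the later of the two is chosen.
pick = latest_in_range(images, datetime(2022, 6, 1), datetime(2022, 6, 30))
```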

-temp_location temp_location

Specifies a temporary datastore on the VM host server where all writes occur until Storage vMotion is complete or until you are finished with the virtual machine (for example, for troubleshooting). This datastore must exist before you run nbrestorevm. This option is used only with Instant Recovery and can be used only with -ir_activate.

-validate -restorespec filename

Validates the virtual machine disk restore parameters in the filename. The -restorespec option is required, and it must follow the -validate option.

For a description of the filename, see the -restorespec option description.


Restores a vCloud virtual machine. This option is required when you restore to the original location or to an alternate location in vCloud.


Restores a vCloud virtual machine by using the datastore with the largest available space. This option applies only to the restore operations that are not directed to the original location.


Overwrites the existing vCloud vApp.


Redirects the vCloud restore.


Removes the vApp if you use the -vcdtemplate option to save the vApp as a template.


Restores a vCloud virtual machine to an existing vCloud vApp. This option is required when you restore to an existing vApp including an original location restore.


Restores a vCloud virtual machine as a template.

-veconfig ve_config_filepath

The full (absolute) path of a file that contains the virtual environment details in param=value format. A veconfig file typically contains the following entries:

network="VM Network"
harddisk=1:"storage1 (2)"
harddisk=2:"storage2 (1)"

The following are notes regarding these entries:

  • The folder, resourcepool, and diskformat fields are optional.

  • The VM conversion on a standalone ESX server uses the following values:

  • To create all VMDKs corresponding to disks on the same datastore, define the datastore name by using the entry datastore="datastoreName".

  • To create VMDKs on different datastores, populate the veconfig file as shown in the sample file above (harddisk=0...).
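For illustration, the param=value entries shown above can be parsed with a short Python sketch (a hypothetical helper, not part of NetBackup):

```python
def parse_veconfig(text):
    """Parse veconfig-style param=value lines (illustrative sketch only).

    Plain entries such as network="VM Network" go into `plain`; indexed
    entries such as harddisk=1:"storage1 (2)" go into `disks` by index.
    """
    plain, disks = {}, {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        key, _, value = line.partition("=")
        if key == "harddisk":
            index, _, datastore = value.partition(":")
            disks[int(index)] = datastore.strip().strip('"')
        else:
            plain[key] = value.strip().strip('"')
    return plain, disks

# The sample entries from the veconfig description above:
plain, disks = parse_veconfig(
    'network="VM Network"\nharddisk=1:"storage1 (2)"\nharddisk=2:"storage2 (1)"'
)
```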


Disk format of the restored disk will be 'eager zero'.


Restores the VMDK files as flat disks.

-vmhost vm_host

Specifies the VM host on which the virtual machines were mounted when you reactivate virtual machines.


Restores a Hyper-V virtual machine at the original location.


Restores a Hyper-V virtual machine to a new location.


Restores Hyper-V virtual machine files to a staging location.


Restores the BIOS UUID of the virtual machine instead of creating a new one.

For VMware: Restores the BIOS UUID of the virtual machine instead of creating a new one.

For Hyper-V: Restores the GUID of the virtual machine instead of creating a new one.

For Hyper-V, when you restore to the original location or to a staging location, the virtual machine's original GUID is restored. This behavior is true even if the vmid option is omitted.


Retains the Instance UUID of the original virtual machine (note that the Instance UUID is a vCenter-specific unique identifier of a virtual machine). The virtual machine is restored with the same Instance UUID that it had when it was backed up.

If the restore of the virtual machine is to a standalone ESXi host, this option is ignored.

If a virtual machine with the same Instance UUID exists at the target restore location, NetBackup assigns a new UUID to the virtual machine.


Retains the hardware version upon recovery. This option applies only to VMware VM recovery.


Generates new virtual machine disk UUIDs during an instant recovery. Use this option with the -ir_activate option.

The VMs that are activated with this option do not retain the new VMDK UUIDs during a subsequent -ir_reactivate operation. In that scenario, the VMDKs revert to their UUIDs at the time of the backup.


Specifies that you do not want to restore the common files when you restore the Hyper-V virtual machine.


Automatically powers up the virtual machine after the restore operation.

-vmproxy VMware_access_host

Specifies the VMware access host. It overrides the default VMProxy used for backing up the virtual machines.

Storage lifecycle policies (SLPs) can use Auto Image Replication to replicate a virtual machine backup image to another NetBackup domain. To restore the virtual machine from the replicated image, you must include the -vmproxy option. Use the -vmproxy option to specify the backup host (access host) that is in the domain where the virtual machine was replicated.


Removes any mounted removable devices such as CD-ROM or DVD-ROM images.

-vmserver VMServer

Specifies a different target location for the restore operation (for example, ESX server or vCenter). It overrides the default VM server used for backing up the virtual machines. To restore to the same vCenter where the virtual machine originally resided, omit this option.


Strips the network interface of the virtual machine.


Strips the VMware tags from the restore.


Disk format of the restored disk will be 'thin'.

-vmtm vm_transport_mode

Specifies the VMware transport mode. An example of the format of vm_transport_mode is san:hotadd:nbd:nbdssl.


Allows the VMware VMDK files to be restored to the same datastore where the VMX file is specified. A rename file that assigns a different vmdk file path overrides this option.


Restores a VMware virtual machine.

-w [hh:mm:ss]

Causes NetBackup to wait for a completion status from the server before it returns you to the system prompt.

The required format of date and time values in NetBackup commands varies according to your locale. The /usr/openv/msg/.conf file (UNIX) and the install_path\VERITAS\msg\LC.CONF file (Windows) contain information such as the date-time formats for each supported locale. The files contain specific instructions on how to add or modify the list of supported locales and formats.

See the "About specifying the locale of the NetBackup installation" topic in the NetBackup Administrator's Guide, Volume II.

You can optionally specify a wait time in hours, minutes, and seconds. The maximum wait time you can specify is 23:59:59. If the wait time expires before the restore is complete, the command exits with a timeout status. The restore, however, still completes on the server.

If you specify 0 or do not specify a time, the wait time is indefinite for the completion status.
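The wait-time rules above (hh:mm:ss format, a 23:59:59 maximum, and 0 meaning an indefinite wait) can be sketched as:

```python
MAX_WAIT = 23 * 3600 + 59 * 60 + 59  # the documented 23:59:59 maximum

def parse_wait(hms):
    """Convert a -w style hh:mm:ss argument to seconds (illustrative sketch).

    Returns None for an indefinite wait (no value, or 0), and rejects
    values above the documented maximum.
    """
    if not hms or hms == "0":
        return None
    hours, minutes, seconds = (int(part) for part in hms.split(":"))
    total = hours * 3600 + minutes * 60 + seconds
    if total > MAX_WAIT:
        raise ValueError("maximum wait time is 23:59:59")
    return total

seconds = parse_wait("00:10:00")  # 600
```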

Applied DNA Initiates Analytical Validation of PCR-based Diagnostic Test Specific to Monkeypox Virus

STONY BROOK, N.Y., August 02, 2022--(BUSINESS WIRE)--Applied DNA Sciences, Inc. (NASDAQ: APDN) (the "Company"), a leader in polymerase chain reaction ("PCR")-based technologies, announced that its wholly-owned clinical laboratory subsidiary, Applied DNA Clinical Labs, LLC ("ADCL"), has initiated analytical validation of a Company-developed, PCR-based monkeypox virus test that is specific for the genetic signature of the monkeypox virus. The test has been developed as a type of NYSDOH Laboratory Developed Test ("LDT"). If the test is validated by ADCL, a validation package will be submitted to New York State Department of Health ("NYSDOH") for approval. If approved, the test will be used to power ADCL’s monkeypox testing services.

ADCL’s monkeypox test utilizes an A17L gene-target specific to monkeypox that enables the qualitative detection and differentiation of monkeypox virus from other non-variola orthopoxviruses using real-time PCR. If validated and approved, testing will be performed at ADCL’s CLEP/CLIA molecular diagnostics laboratory in Stony Brook, N.Y., utilizing its established and proven workflows to ensure accurate results and competitive turn-around-times.

"Based on our experience with the COVID-19 pandemic, we are keenly aware of the crucial role PCR-based diagnostic tools can play in responding and helping to control public health outbreaks. With a proven workflow and testing services born of COVID-19, upon test approval, ADCL stands ready to apply its testing capacity in service of New Yorkers’ health," stated Dr. James A. Hayward, president and CEO of Applied DNA Sciences.

More information about safeCircle™, ADCL's testing platform, is available on the Company's website.

About Applied DNA Sciences

Applied DNA Sciences is a biotechnology company developing technologies to produce and detect deoxyribonucleic acid ("DNA"). Using PCR to enable both the production and detection of DNA, we operate in three primary business markets: (i) the manufacture of DNA for use in nucleic acid-based therapeutics; (ii) the detection of DNA in molecular diagnostics testing services; and (iii) the manufacture and detection of DNA for industrial supply chain security services.

Visit for more information. Follow us on Twitter and LinkedIn. Join our mailing list.

The Company’s common stock is listed on NASDAQ under the ticker symbol ‘APDN,’ and its publicly traded warrants are listed on OTC under the ticker symbol ‘APPDW.’

Forward-Looking Statements

The statements made by Applied DNA in this press release may be "forward-looking" in nature within the meaning of Section 27A of the Securities Act of 1933, Section 21E of the Securities Exchange Act of 1934 and the Private Securities Litigation Reform Act of 1995. Forward-looking statements describe Applied DNA's future plans, projections, strategies, and expectations, and are based on assumptions and involve a number of risks and uncertainties, many of which are beyond the control of Applied DNA. Actual results could differ materially from those projected due to its history of net losses, limited financial resources, limited market acceptance, the possibility that Applied DNA's testing services could become obsolete or have their utility diminished, and the unknown amount of revenues and profits that will result from Applied DNA's testing services. Further risks include the uncertainties inherent in research and development and in future data and analysis, including whether any of Applied DNA's current or future diagnostic candidates will advance further in the research and/or validation process or receive authorization, clearance or approval from the FDA, equivalent foreign regulatory agencies and/or the NYSDOH; whether and when, if at all, they will receive final authorization, clearance or approval from the FDA, equivalent foreign regulatory agencies and/or the NYSDOH; the unknown outcome of any applications or requests to the FDA, equivalent foreign regulatory agencies and/or the NYSDOH; disruptions in the supply of raw materials and supplies; and various other factors detailed from time to time in Applied DNA's SEC reports and filings, including our Annual Report on Form 10-K filed on December 9, 2021, its Quarterly Reports on Form 10-Q filed on February 10, 2022 and May 12, 2022, and other reports we file with the SEC, which are available at Applied DNA undertakes no obligation to update publicly any forward-looking statements to reflect new information, events or circumstances after the date hereof or to reflect the occurrence of unanticipated events, unless otherwise required by law.



Investor Relations Contact: Sanjay M. Hurry, 917-733-5573,
Program Contact: Mike Munzer, 631-240-8814,
Twitter: @APDN

Weaving a New Web

In 1969 scientists at the University of California, Los Angeles, transmitted a couple of bits of data between two computers, and thus the Internet was born. Today about 2 billion people access the Web regularly, zipping untold exabytes of data (that’s 10^18 pieces of information) through copper and fiber lines around the world. In the United States alone, an estimated 70 percent of the population owns a networked computer. That number grows to 80 percent if you count smartphones, and more and more people jump online every day. But just how big can the information superhighway get before it starts to buckle? How much growth can the routers and pipes handle? The challenges seem daunting. The current Internet Protocol (IP) system that connects global networks has nearly exhausted its supply of 4.3 billion unique addresses. Video is projected to account for more than 90 percent of all Internet traffic by 2014, a sudden new demand that will require a major increase in bandwidth. Malicious software increasingly threatens national security. And consumers may face confusing new options as Internet service providers consider plans to create a “fast lane” that would prioritize some Web sites and traffic types while others are routed more slowly.

Fortunately, thousands of elite network researchers spend their days thinking about these thorny issues. Last September DISCOVER and the National Science Foundation convened four of them for a lively discussion, hosted by the Georgia Institute of Technology in Atlanta, on the next stage of Internet evolution and how it will transform our lives. DISCOVER editor in chief Corey S. Powell joined Cisco’s Paul Connolly, who works with Internet service providers (ISPs); Georgia Tech computer scientist Nick Feamster, who specializes in network security; William Lehr of MIT, who studies wireless technology, Internet architecture, and the economic and policy implications of online access; and Georgia Tech’s Ellen Zegura, an expert on mobile networking (click here for video of the event).

Powell: Few people anticipated Google’s swift rise, the vast influence of social media, or the Web’s impact on the music, television, and publishing industries. How do we even begin to map out what will come next?

Lehr: One thing the Internet has taught us thus far is that we can’t predict it. That’s wonderful because it allows for the possibility of constantly reinventing it.

Zegura: Our response to not being able to predict the Internet is to try to make it as flexible as possible. We don’t know for sure what will happen, so if we can create a platform that can accommodate many possible futures, we can position ourselves for whatever may come. The current Internet has held up quite well, but it is ready for some changes to prepare it to serve us for the next 30, 40, or 100 years. By building the ability to innovate into the network, we don’t have to know exactly what’s coming down the line. That said, Nick and others have been working on a test bed called GENI, the Global Environment for Network Innovations project that will allow us to experiment with alternative futures.

Powell: Almost like using focus groups to redesign the Internet?

Zegura: That’s not a bad analogy, although some of the testing might be more long-term than a traditional focus group.

Powell: What are some major online trends, and what do they suggest about where we are headed?

Feamster: We know that paths are getting shorter: From point A to point B, your traffic is going through fewer and fewer Internet service providers. And more and more data are moving into the cloud. Between now and 2020, the number of people on the Internet is expected to double. For those who will come online in the next 10 years or so, we don’t know how they’re going to access the Internet, how they’re going to use it, or what kinds of applications they might use. One trend is the proliferation of mobile devices: There could be more than a billion cell phones in India alone by 2015.

Powell: So there’s a whole universe of wireless connectivity that could potentially become an Internet universe?

Feamster: Absolutely. We know things are going to look vastly different from people sitting at desktops or laptops and browsing the Web. Also, a lot of Internet innovation has come not from research but from the private sector, both large companies and start-ups. As networking researchers, we should be thinking about how best to design the network substrate to allow it to evolve, because all we know for sure is that it’s going to keep changing.

Powell: What kind of changes and challenges do you anticipate?

Lehr: We’re going to see many different kinds of networks. As the Internet pushes into the developing world, the emphasis will probably be on mobile networks. For now, the Internet community is still very U.S.-centric. Here, we have very strong First Amendment rights (see “The Five Worst Countries for Surfing the Web,” page 5), but that’s not always the case elsewhere in the world, so that’s something that could cause friction as access expands.

Powell: Nearly 200 million Americans have a broadband connection at home. The National Broadband Plan proposes that everyone here should have affordable broadband access by 2020. Is private industry prepared for this tremendous spike in traffic?

Connolly: Our stake in the ground is that global traffic will quadruple by 2014, and we believe 90 percent of consumer traffic will be video-based. The question is whether we can deal with all those bits at a cost that allows stakeholders to stay in business. The existing Internet is not really designed to handle high volumes of media. When we look at the growth rate of bandwidth, it has followed a consistent path, but you have to focus on technology at a cost. If we can’t hit a price target, it doesn’t go mainstream. When we hit the right price, all of a sudden people say, “I want to do that,” and away we go.

Powell: As networks connect to crucial systems—such as medical equipment, our homes, and the electrical grid—disruptions will become costly and even dangerous. How do we keep everything working reliably?

Lehr: We already use the cyber world to control the real world in our car engines and braking systems, but when we start using the Internet, distributed networks, and resources on some cloud to make decisions for us, that raises a lot of questions. One could imagine all kinds of scenarios. I might have an insulin pump that’s controlled over the Internet, and some guy halfway around the world can hack into it and change my drug dosage.

Feamster: The late Mark Weiser, chief technologist at the Xerox Palo Alto Research Center, said the most profound technologies are the ones that disappear. When we drive a car, we’re not even aware that there’s a huge network under the hood. We don’t have to know how it works to drive that car. But if we start networking appliances or medical devices and we want those networks to disappear in the same way, we need to rely on someone else to manage them for us, so privacy is a huge concern. How do I provide someone visibility and access so they can fix a problem without letting them see my personal files, or use my printer, or open my garage door? The issues that span usability and privacy are going to become increasingly important.

Zegura: I would not be willing to have surgery over the Internet today because it’s not secure or reliable enough. Many environments are even more challenging: disaster situations, remote areas, military settings. But many techniques have been developed to deal with places that lack robust communications infrastructure. For instance, my collaborators and I have been developing something called message ferries. These are mobile routers, nodes in the environment that enable communication. Message ferries could be on a bus, in a backpack, or on an airplane. Like a ferry picks up passengers, they pick up messages and deliver them to another region.

Powell: Any takers for surgery over the Internet? Show of hands?

Lehr: If I’m in the Congo and I need surgery immediately, and that’s the only way they can provide it to me, sure. Is it ready for prime time? Absolutely not.

Powell: Many Web sites now offer services based on “cloud computing.” What is the concept behind that?

Feamster: One of the central tenets of cloud computing is virtualization. What that means is that instead of having hardware that’s yours alone, you share it with other people, whom you might not trust. This is evident in Gmail and Google Docs. Your personal documents are sitting on the same machine with somebody else’s. In this kind of situation, it’s critical to be able to track where data go. Several of my students are working on this issue.

Powell: With more and more documents moving to the cloud, aren’t there some complications from never knowing exactly where your data are or what you’re connecting to?

Lehr: A disconnect between data and physical location puts providers in a difficult position—for example, Google deciding what to do with respect to filtering search results in China. It’s a global technology provider. It can potentially influence China’s rules, but how much should it try to do that? People are reexamining this issue at every level.

Powell: In one recent survey, 65 percent of adults in 14 countries reported that they had been the victim of some type of cyber crime. What do people need to know to protect themselves?

Feamster: How much do you rely on educating users versus shielding them from having to make sensitive decisions? In some instances you can prevent people from making mistakes or doing malicious things. Last year, for instance, Goldman Sachs was involved in a legal case in which the firm needed to show that no information had been exchanged between its trading and accounting departments. That’s the kind of thing that the network should just take care of automatically, so it can’t happen no matter what users do.

Zegura: I agree that in cases where it’s clear that there is something people should not do, and we can make it impossible to do it, that’s a good thing. But we can’t solve everything that way. There is an opportunity to help people understand more about what’s going on with networks so they can look out for themselves. A number of people don’t understand how you can get e-mail that looks like it came from your mother, even though it didn’t. The analogy is that someone can take an envelope and write your name on it, write your mother’s name on the return address, and stick it in your mailbox. Now you have a letter in your mailbox that looks like it came from your mother, but it didn’t. The same thing can happen with e-mail. It’s possible to write any address on an Internet packet so it looks like it came from somewhere else. That’s a very basic understanding that could help people be much smarter about how they use networks.
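Zegura's envelope analogy maps directly onto the mail format itself. A minimal Python sketch (addresses are placeholders) showing that the From header is just text the sender writes:

```python
from email.message import EmailMessage

# Nothing in the message format verifies the sender: the From header is
# simply a string the sender fills in. Receiving-side checks such as
# SPF, DKIM, and DMARC exist precisely because of this.
msg = EmailMessage()
msg["From"] = "mom@example.com"   # claimed sender -- not verified anywhere
msg["To"] = "you@example.com"
msg["Subject"] = "Hello"
msg.set_content("This header says nothing about who really wrote it.")
```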

Audience: How is the Internet changing the way we learn?

Feamster: Google CEO Eric Schmidt once gave an interview in which he was talking about how kids are being quizzed on things like country capitals. He essentially said, “This is ridiculous. I can just go to Google and search for capitals. What we really should be teaching students is where to find answers.” That’s perhaps the viewpoint of someone who is trying to catalog all the world’s information and says, “Why don’t you use it?” But there’s something to be said for it—there’s a lot of data at our fingertips. Maybe education should shift to reflect that.

Audience: Do you think it will ever be possible to make the Internet totally secure?

Feamster: We’ll never have perfect security, but we can make it tougher. Take the problem of spam. You construct new spam filters, and then the spammers figure out that you’re looking for messages sent at a certain time or messages of a certain size, so they have to shuffle things up a bit. But the hope is that you’ve made it harder. It’s like putting up a higher fence around your house. You won’t stop problems completely, but you can make break-ins inconvenient or costly enough to mitigate them.

Audience: Should there be limits on how much personal information can be collected online?

Zegura: Most of my undergraduate students have a sensitivity to private information that’s very different from mine. But even if we’re savvy, we can still be unaware of the personal data that some companies collect. In general, it needs to be much easier for people to make informed choices.

Feamster: The thing that scares me the most is what happens when a company you thought you trusted gets bought or goes out of business and sells all of your data to the lowest bidder. There are too few regulations in place to protect us, even if we understand the current privacy policies.

Lehr: Technologically, Bill Joy [co-founder of Sun Microsystems] was right when he said, “Privacy is dead; just get over it.” Privacy today can no longer be about whether someone knows something, because we can’t regulate that effectively. What matters now is what they can do with what they know.

Audience: Wiring society creates the capacity to crash society. The banking system, utilities, and business administration are all vulnerable. How do we meaningfully weigh the benefits against the risks?

Lehr: How we decide to use networks is very important. For example, we might decide to have separate networks for certain systems. I cannot risk some kid turning on a generator in the Ukraine and blowing something up in Kentucky, so I might keep my electrical power grid network completely separate. This kind of question engages more than just technologists. A wider group of stakeholders needs to weigh in.

Connolly: You always have to balance the good versus the potential for evil. Occasionally big blackouts in the Northeast cause havoc, but if we decided not to have electricity because of that risk, that would be a bad decision, and I don’t think it’s any worse in the case of the Internet. We have to be careful, but there’s so much possibility for enormous good. The power of collaboration, with people working together through the Internet, gives us tremendous optimism for the kinds of issues we will be able to tackle.

The Conversation in Context: 12 Ideas That Will Reshape the Way We Live and Work Online

1. Change how the data flow
A good place to start is with the overburdened addressing system, known as IPv4. Every device connected to the Internet, including computers, smartphones, and servers, has a unique identifier, or Internet protocol (IP) address. “Whenever you type in the name of a Web site, the computer essentially looks at a phone book of IP addresses,” explains Craig Labovitz, chief scientist at Arbor Networks, a software and Internet company. “It needs a number to call to connect you.” Trouble is, IPv4 is running out of identifiers. In fact, the expanding Web is expected to outgrow IPv4’s 4.3 billion addresses within a couple of years. Anticipating this shortage, researchers began developing a new IP addressing system, known as IPv6, more than a decade ago. IPv6 is ready to roll, and the U.S. government and some big Internet companies, such as Google, have pledged to switch over by 2012. But not everyone is eager to follow. For one, the jump necessitates costly upgrades to hardware and software. Perhaps a bigger disincentive is the incompatibility of the two addressing systems, which means companies must support both versions throughout the transition to ensure that everyone will be able to access content. In the meantime, IPv4 addresses, which are typically free, may be bought and sold. For the average consumer, Labovitz says, that could translate to pricier Internet access.

2. Put the next internet to the test
In one GENI experiment, Stanford University researcher Kok-Kiong Yap is researching a futuristic Web that seamlessly transitions between various cellular and WiFi networks, allowing smartphones to look for an alternative connection whenever the current one gets overwhelmed. That’s music to the ears of everyone toting an iPhone.

3. Move data into the cloud
As Nick Feamster says, the cloud is an increasingly popular place to store data. So much so, in fact, that technology research company Gartner predicts the estimated value of the cloud market, including all software, advertising, and business transactions, will exceed $150 billion by 2013. Why the boom? Convenience. At its simplest, cloud computing is like a giant, low-cost, low-maintenance storage locker. Centralized servers, provided by large Internet companies like Microsoft, Google, and Amazon, plus scores of smaller ones worldwide, let people access data and applications over the Internet instead of storing them on personal hard drives. This reduces costs for software licensing and hardware.

4. Settle who owns the internet
While much of the data that zips around the Internet is free, the routers and pipes that enable this magical transmission are not. The question of who should pay for rising infrastructure costs, among other expenses, is at the heart of the long-standing net neutrality debate. On the one side, Internet service providers argue that charging Web sites more for bandwidth-hogging data such as video will allow them to expand capacity and deliver data faster and more reliably. Opponents counter that such a tiered or “pay as you go” Internet would unfairly favor wealthier content providers, allowing the richest players to indirectly censor their cash-strapped competition. So which side has the legal edge? Last December the Federal Communications Commission approved a compromise plan that would allow ISPs to prioritize traffic for a fee, but the FCC promises to police anticompetitive practices, such as an ISP’s mistreating, say, Netflix, if it wants to promote its own instant-streaming service. The extent of the FCC’s authority remains unclear, however, and the ruling could be challenged as early as this month.

5. Understand what can happen when networks make decisions for us
In November Iranian president Mahmoud Ahmadinejad confirmed that the Stuxnet computer worm had sabotaged national centrifuges used to enrich nuclear fuel. Experts have determined that the malicious code hunts for electrical components operating at particular frequencies and hijacks them, potentially causing them to spin centrifuges at wildly fluctuating rates. Labovitz of Arbor Networks says, “Stuxnet showed how skilled hackers can militarize technology.”

6. Get ready for virtual surgery
Surgeon Jacques Marescaux performed the first trans-Atlantic operation in 2001 when he sat in an office in New York and delicately removed the gall bladder of a woman in Strasbourg, France. Whenever he moved his hands, a robot more than 4,000 miles away received signals via a broadband Internet connection and, within 15-hundredths of a second, perfectly mimicked his movements. Since then more than 30 other patients have undergone surgery over the Internet. “The surgeon obviously needs a guarantee that the connection won’t be interrupted,” says surgeon Richard Satava of the University of Washington. “And you need a consistent time delay. You don’t want to see a robot continually change its response time to your hand motions.”

7. Bring on the message ferries
A message ferry is a mobile device or Internet node that could relay data in war zones, disaster sites, and other places lacking communications infrastructure.

8. Don’t share hardware with people whom you might not trust
Or who might not trust you. The tenuous nature of free speech on the Internet cropped up in December when Amazon Web Services booted WikiLeaks from its cloud servers. Amazon charged that the nonprofit violated its terms of service, although the U.S. government may have had more to do with the decision than Amazon admits. WikiLeaks, for its part, shot back on Twitter, “If Amazon are [sic] so uncomfortable with the First Amendment, they should get out of the business of selling books.”

Unfortunately for WikiLeaks, Amazon is not a government agency, so there is no First Amendment case against it, according to Internet scholar and lawyer Wendy Seltzer of Princeton University. You may be doing something perfectly legal on Amazon’s cloud, Seltzer explains, and Amazon could give you the boot because of government pressure, protests, or even too many service calls. “Service providers give end users very little recourse, if any,” she observes. That’s why people are starting to think about “distributed hosting,” in which no one company has total power, and thus no one company controls freedom of speech.

9. Make cloud computing secure
Nick Feamster’s strategy is to tag sensitive information with irrevocable digital labels. For example, an employee who wants only his boss to read a message could create a label designating it as secret. That label would remain with the message as it passed through routers and servers to reach the recipient, preventing a snooping coworker from accessing it. “The file could be altered, chopped in two, whatever, and the label would remain with the data,” Feamster says. The label would also prohibit the boss from relaying the message to someone else. Feamster expects to unveil a version of his labeling system, called Pedigree, later this year.

10. Manage your junk mail
A lot of it. Spam accounts for about 85 percent of all e-mail. That’s more than 50 billion junk messages a day, according to the online security company Symantec.
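Those two figures imply a total daily e-mail volume, which follows from simple division. A quick back-of-envelope sketch (the exact totals vary by source and year):

```java
public class SpamEstimate {
    // If spamPerDay messages make up spamShare (e.g. 0.85) of all e-mail,
    // the implied total daily volume is spamPerDay / spamShare.
    static double totalEmailPerDay(double spamPerDay, double spamShare) {
        return spamPerDay / spamShare;
    }

    public static void main(String[] args) {
        // 50 billion junk messages at an 85 percent spam share
        double total = totalEmailPerDay(50e9, 0.85);
        System.out.printf("Implied total e-mail per day: about %.0f billion%n", total / 1e9);
    }
}
```

That works out to roughly 59 billion messages a day in total.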

11. Privacy is dead? Don’t believe it
As we cope with the cruel fact that the Internet never forgets, researchers are looking toward self-destructing data as a possible solution. Vanish, a program created at the University of Washington, encodes data with cryptographic tags that degrade over time like vanishing ink. A similar program, aptly called TigerText, allows users to program text messages with a “destroy by” date that activates once the message is opened. Another promising option, of course, is simply to exercise good judgment.
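The “destroy by” idea can be sketched as a simple expiry check. This is an illustrative model only — Vanish and TigerText use their own mechanisms, and genuinely self-destructing data needs cryptographic enforcement, not just an honor-system timestamp:

```java
public class ExpiringMessage {
    private final String body;
    private final long destroyBySec; // deadline clock starts when the message is opened

    ExpiringMessage(String body, long openedAtSec, long ttlSec) {
        this.body = body;
        this.destroyBySec = openedAtSec + ttlSec;
    }

    // Readable only until the "destroy by" time; afterward the body is gone.
    String read(long nowSec) {
        return nowSec < destroyBySec ? body : null;
    }

    public static void main(String[] args) {
        ExpiringMessage m = new ExpiringMessage("meet at noon", 0, 3600);
        System.out.println(m.read(60));    // within the hour: still readable
        System.out.println(m.read(7200));  // two hours later: null
    }
}
```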

12. Network to make a better world
Crowdsourcing science projects that harness the power of the wired masses have tremendous potential to quickly solve problems that would otherwise take years to resolve. Notable among these projects is Foldit, an engaging online puzzle created by Seth Cooper of the University of Washington and others that tasks gamers with figuring out the shapes of hundreds of proteins, which in turn can lead to new medicines. Another is the UC Berkeley Space Sciences Lab’s Stardust@home project, which has recruited about 30,000 volunteers to scour, via the Internet, microscope images of interstellar dust particles collected from the tail of a comet that may hold clues to how the solar system formed. And Cornell University’s NestWatch educates people about bird breeding and encourages them to submit nest records to an online database. To date, the program has collected nearly 400,000 nest records on more than 500 bird species.

Check out citizenscience for more projects.

Andrew Grant and Andrew Moseman

The Five Worst Countries for Surfing the Web


China
Government control of the Internet makes using the Web in China particularly limiting and sometimes dangerous. Chinese officials, for instance, imprisoned human rights activist Liu Xiaobo in 2009 for posting his views on the Internet and then blocked news Web sites that covered the Nobel Peace Prize ceremony honoring him last December. Want to experience China’s censorship firsthand? Go to the country’s most popular search engine and type in “Tiananmen Square massacre.”

North Korea
It’s hard to surf the Web when there is no Web to surf. Very few North Koreans have access to the Internet; in fact, due to the country’s isolation and censorship, many of its citizens do not even know it exists.

Burma
Burma is the worst country in which to be a blogger, according to a 2009 report by the Committee to Protect Journalists. Blogger Maung Thura, popularly known in the country as Zarganar, was sentenced to 35 years in prison for posting content critical of the government’s aid efforts after a cyclone.


Iran
The Iranian government employs an extensive Web site filtering system, according to the press freedom group Reporters Without Borders, and limits Internet connection speeds to curb the sharing of photos and videos. Following the controversial 2009 reelection of president Mahmoud Ahmadinejad, protesters flocked to Twitter to voice their displeasure after the government blocked various news and social media Web sites.


Cuba
Only 14 percent of Cubans have access to the Internet, and the vast majority are limited to a government-controlled network made up of e-mail, an encyclopedia, government Web sites, and selected foreign sites supportive of the Cuban dictatorship. Last year Cuban officials accused the United States of encouraging subversion by allowing companies to offer Internet communication services there.

Andrew Grant

Interview: Frank Cohen on FastSOA

InfoQ today publishes a one-chapter excerpt from Frank Cohen's book "FastSOA". On this occasion, InfoQ had a chance to talk to Frank Cohen, creator of the FastSOA methodology, about the issues that arise when processing XML messages, scalability, using XQuery in the middle tier, and document-object-relational mapping.

Can you briefly explain the ideas behind "FastSOA"?

Frank Cohen: For the past 5-6 years I have been investigating the impact an average Java developer's choice of technology, protocols, and patterns for building services has on the scalability and performance of the resulting application. For example, Java developers today have a choice of 21 different XML parsers! Each one has its own scalability, performance, and developer productivity profile. So a developer's choice on technology makes a big impact at runtime.

I looked at distributed systems that used message oriented middleware to make remote procedure calls. Then I looked at SOAP-based Web Services. And most recently at REST and AJAX. These experiences led me to look at SOA scalability and performance built using application server, enterprise service bus (ESB), business process execution (BPEL), and business integration (BI) tools. Across all of these technologies I found a consistent theme: At the intersection of XML and SOA are significant scalability and performance problems.

FastSOA is a test methodology and set of architectural patterns to find and solve scalability and performance problems. The patterns teach Java developers that there are native XML technologies, such as XQuery and native XML persistence engines, that should be considered in addition to Java-only solutions.

InfoQ: What's "Fast" about it? ;-)

FC: First off, let me describe the extent of the problem. Java developers building Web enabled software today have a lot of choices. We've all heard about Service Oriented Architecture (SOA), Web Services, REST, and AJAX techniques. While there are a LOT of different and competing definitions for these, most Java developers I speak to expect that they will be working with objects that message to other objects - locally or on some remote server - using encoded data, and often the encoded data is in XML format.

The nature of these interconnected services we're building means our software needs to handle messages that can be small to large and simple to complex. Consider the performance penalty of using a SOAP interface and a streaming XML parser (StAX) to handle a simple message schema as the message size grows. A modern and expensive multi-processor server that easily serves 40 to 80 Web pages per second serves as little as 1.5 to 2 XML requests per second.

[Figure: Scalability Index]

Without some sort of remediation Java software often slows to a crawl when handling XML data because of a mismatch between the XML schema and the XML parser. For instance, we tested one SOAP stack that instantiated 14,385 Java objects to handle a request message of 7,000 bytes that contains 200 XML elements.
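A minimal illustration of where that per-element cost comes from, using the JDK's built-in StAX API. The message here is made up for the demo; the 14,385-object figure above came from a full SOAP binding stack, which does far more work per element than this bare event count:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class ElementCount {
    // Count START_ELEMENT events: every element costs parser work, and with
    // data binding each one typically becomes at least one Java object.
    static int countElements(String xml) {
        try {
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(xml));
            int n = 0;
            while (r.hasNext()) {
                if (r.next() == XMLStreamConstants.START_ELEMENT) n++;
            }
            return n;
        } catch (Exception e) {
            return -1; // malformed XML
        }
    }

    public static void main(String[] args) {
        StringBuilder b = new StringBuilder("<order>");
        for (int i = 0; i < 200; i++) b.append("<item id='").append(i).append("'/>");
        b.append("</order>");
        System.out.println(countElements(b.toString())); // 201 elements to process
    }
}
```

Doubling the number of `item` elements doubles the event count — parse work scales with message size and complexity, not with the number of requests.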

Of course, titling my work SlowSOA didn't sound as good. FastSOA offers a way to solve many of the scalability and performance problems. FastSOA uses native XML technology to provide service acceleration, transformation, and federation services in the mid-tier. For instance, an XQuery engine provides a SOAP interface for a service that handles decoding the request, transforms the request data into something more useful, and routes the request to a Java object or another service.

InfoQ: One alternative to XML databinding in Java is the use of XML technologies, such as XPath or XQuery. Why muddy the water with XQuery? Why not just use Java technology?

FC: We're all after the same basic goals:

  1. Good scalability and performance in SOA and XML environments.
  2. Rapid development of software code.
  3. Flexible and easy maintenance of software code as the environment and needs change.

In SOA, Web Service, and XML domains I find the usual Java choices don't get me to all three goals.

Chris Richardson explains the Domain Model Pattern in his book POJOs in Action. The Domain Model is a popular pattern to build Web applications and is being used by many developers to build SOA composite applications and data services.


The Domain Model divides into three portions: A presentation tier, an application tier, and a data tier. The presentation tier uses a Web browser with AJAX and RSS capabilities to create a rich user interface. The browser makes a combination of HTML and XML requests to the application tier. Also at the presentation tier is a SOAP-based Web Service interface to allow a customer system to access functions directly, such as a parts ordering function for a manufacturer's service.

At the application tier, an Enterprise Java Bean (EJB) or plain-old Java object (Pojo) implements the business logic to respond to the request. The EJB uses a model, view, controller (MVC) framework - for instance, Spring MVC, Struts or Tapestry - to respond to the request by generating a response Web page. The MVC framework uses an object/relational (O/R) mapping framework - for instance Hibernate or Spring - to store and retrieve data in a relational database.

I see problem areas that cause scalability and performance problems when using the Domain Model in XML environments:

  • XML-Java Mapping requires increasingly more processor time as XML message size and complexity grows.
  • Each request runs the entire service. For instance, many times the user will check order status sooner than any status change is realistic. If the system kept track of the most recent response's time-to-live duration, it would not have to run the entire service and could instead return the previously cached response.
  • The vendor application requires the request message to be in XML form. The data the EJB previously processed from XML into Java objects now needs to be transformed back into XML elements as part of the request message. Many Java to XML frameworks - for instance, JAXB, XMLBeans, and Xerces - require processor intensive transformations. Also, I find these frameworks challenging me to write difficult and needlessly complex code to perform the transformation.
  • The service persists order information in a relational database using an object-relational mapping framework. The framework transforms Java objects into relational rowsets and performs joins among multiple tables. As object complexity and size grows my research shows many developers need to debug the O/R mapping to Strengthen speed and performance.

In no way am I advocating a move away from your existing Java tools and systems. There is a lot we can do to resolve these problems without throwing anything out. For instance, we could introduce a mid-tier service cache using XQuery and a native XML database to mitigate and accelerate many of the XML domain specific requests.


The advantage to using the FastSOA architecture as a mid-tier service cache is in its ability to store any general type of data, and its strength in quickly matching services with sets of complex parameters to efficiently determine when a service request can be serviced from the cache. The FastSOA mid-tier service cache architecture accomplishes this by maintaining two databases:

  • Service Database. Holds the cached message payloads. For instance, the service database holds a SOAP message in XML form, an HTML Web page, text from a short message, and binary from a JPEG or GIF image.
  • Policy Database. Holds units of business logic that look into the service database contents and make decisions on servicing requests with data from the service database or passing through the request to the application tier. For instance, a policy that receives a SOAP request validates security information in the SOAP header to confirm that a user may receive previously cached response data. In another instance a policy checks the time-to-live value from a stock market price quote to see if it can respond to a request with the stock value stored in the service database.

FastSOA uses the XQuery data model to implement policies. The XQuery data model supports any general type of document and any general dynamic parameter used to fetch and construct the document. Used to implement policies, the XQuery engine allows FastSOA to efficiently assess common criteria across the data in the service cache, and the flexibility of XQuery allows user-driven fuzzy pattern matches to be represented efficiently.
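The time-to-live policy described above can be sketched in plain Java. This is a toy in-memory stand-in for the service and policy databases — the real FastSOA design uses an XQuery engine over a native XML store:

```java
import java.util.HashMap;
import java.util.Map;

public class MidTierCache {
    private static class Entry {
        final String payload;     // cached response (SOAP XML, HTML, an image, etc.)
        final long expiresAtSec;  // time-to-live deadline
        Entry(String payload, long expiresAtSec) {
            this.payload = payload;
            this.expiresAtSec = expiresAtSec;
        }
    }

    private final Map<String, Entry> serviceDb = new HashMap<>();

    void store(String requestKey, String payload, long nowSec, long ttlSec) {
        serviceDb.put(requestKey, new Entry(payload, nowSec + ttlSec));
    }

    // Policy: answer from the service database while the cached response is still
    // fresh; a null result means pass the request through to the application tier.
    String lookup(String requestKey, long nowSec) {
        Entry e = serviceDb.get(requestKey);
        return (e != null && nowSec < e.expiresAtSec) ? e.payload : null;
    }

    public static void main(String[] args) {
        MidTierCache cache = new MidTierCache();
        cache.store("quote:ACME", "<price>42.10</price>", 0, 30);
        System.out.println(cache.lookup("quote:ACME", 10)); // fresh: served from cache
        System.out.println(cache.lookup("quote:ACME", 60)); // stale: null, pass through
    }
}
```

The key design point is that the freshness decision lives in the mid-tier policy, not in the service itself, so the application tier is only invoked when the cache cannot answer.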

FastSOA uses native XML database technology for the service and policy databases for performance and scalability reasons. Relational database technology delivers satisfactory performance to persist policy and service data in a mid-tier cache provided the XML message schemas being stored are consistent and the message sizes are small.

InfoQ: What kinds of performance advantages does this deliver?

FC: I implemented a scalability test to compare native XML technology with Java technology for implementing a service that receives SOAP requests.

[Figure: TPS for Service Interface]

The test varies the size of the request message among three levels: 68 KB, 202 KB, and 403 KB. The test measures the roundtrip time to respond to the request at the consumer. The test results are from a server with dual Intel Xeon 3.0 GHz processors running on a gigabit switched Ethernet network. I implemented the code in two ways:

  • FastSOA technique. Uses native XML technology to provide a SOAP service interface. I used a commercial XQuery engine to expose a socket interface that receives the SOAP message, parses its content, and assembles a response SOAP message.
  • Java technique. Uses the SOAP binding proxy interface generator from a popular commercial Java application server. A simple Java object receives the SOAP request from the binding, parses its content using JAXB created bindings, and assembles a response SOAP message using the binding.

The results show a 2 to 2.5 times performance improvement when using the FastSOA technique to expose service interfaces. The FastSOA method is faster because it avoids many of the mappings and transformations that are performed in the Java binding approach to work with XML data. The greater the complexity and size of the XML data the greater will be the performance improvement.

InfoQ: Won't these problems get easier with newer Java tools?

FC: I remember hearing Tim Bray, co-inventor of XML, exhorting a large group of software developers in 2005 to go out and write whatever XML formats they needed for their applications. Look at all of the different REST and AJAX related schemas that exist today. They are all different and many of them are moving targets over time. Consequently, when working with Java and XML the average application or service needs to contend with three facts of life:

  1. There's no gatekeeper to the XML schemas. So a message in any schema can arrive at your object at any time.
  2. The messages may be of any size. For instance, some messages will be very short (less than 200 bytes) while some messages may be giant (greater than 10 Mbytes).
  3. The messages use simple to complex schemas. For instance, the message schema may have very few levels of hierarchy (less than 5 children for each element) while other messages will have multiple levels of hierarchy (greater than 30 children).

What's needed is an easy way to consume any size and complexity of XML data and to easily maintain it over time as the XML changes. This kind of changing landscape is what XQuery was created to address.

InfoQ: Is FastSOA only about improving service interface performance?

FC: FastSOA addresses these problems:

  • Solves SOAP binding performance problems by reducing the need for Java objects and increasing the use of native XML environments to provide SOAP bindings.
  • Introduces a mid-tier service cache to provide SOA service acceleration, transformation, and federation.
  • Uses native XML persistence to solve XML, object, and relational incompatibility.

[Figure: FastSOA Pattern]

FastSOA is an architecture that provides a mid-tier service binding, XQuery processor, and native XML database. The binding is a native and streams-based XML data processor. The XQuery processor is the real mid-tier: it parses incoming documents, determines the transaction, communicates with the "local" service to obtain the stored data, serializes the data to XML, and stores the data into a cache while recording a time-to-live duration. While this is an XML-oriented design, XQuery and native XML databases also handle non-XML data, including images, binary files, and attachments. An equally important benefit of the XQuery processor is the ability to define policies that operate on the data at runtime in the mid-tier.


FastSOA provides mid-tier transformation between a consumer that requires one schema and a service that only provides responses using a different and incompatible schema. The XQuery in the FastSOA tier transforms the requests and responses between incompatible schema types.


Lastly, when a service commonly needs to aggregate the responses from multiple services into one response, FastSOA provides service federation. For instance, many content publishers such as the New York Times provide news articles using the Really Simple Syndication (RSS) protocol. FastSOA may federate news analysis articles published on a Web site with late-breaking news stories from several RSS feeds. This can be done in your application but is better done in FastSOA because the content (news stories and RSS feeds) usually includes time-to-live values that are ideal for FastSOA's mid-tier caching.
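Federation itself reduces to merging several feeds into one response, newest items first. A sketch with plain Java lists standing in for RSS parsing (in FastSOA this merge would be an XQuery over cached feed documents):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class FeedFederation {
    // Each item is {ISO-8601 timestamp, title}; ISO-8601 strings sort
    // chronologically, so a plain string comparison orders the merged feed.
    @SafeVarargs
    static List<String> federate(List<String[]>... feeds) {
        List<String[]> all = new ArrayList<>();
        for (List<String[]> feed : feeds) all.addAll(feed);
        all.sort(Comparator.comparing((String[] item) -> item[0]).reversed());
        List<String> titles = new ArrayList<>();
        for (String[] item : all) titles.add(item[1]);
        return titles;
    }

    public static void main(String[] args) {
        List<String[]> analysis = List.of(
                new String[]{"2011-01-02", "Net neutrality analysis"});
        List<String[]> wire = List.of(
                new String[]{"2011-01-03", "FCC ruling challenged"},
                new String[]{"2011-01-01", "ISPs propose tiered pricing"});
        System.out.println(federate(analysis, wire)); // merged, newest first
    }
}
```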

InfoQ: Can you elaborate on the problems you see in combining XML with objects and relational databases?

FC: While I recommend using a native XML database for XML persistence it is possible to be successful using a relational database. Careful attention to the quality and nature of your application's XML is needed. For instance, XML is already widely used to express documents, document formats, interoperability standards, and service orchestrations. There are even arguments put forward in the software development community to represent service governance in XML form and operate on it with XQuery methods. In a world full of XML, we software developers have to ask if it makes sense to use relational persistence engines for XML data. Consider these common questions:

  • How difficult is it to get XML data into a relational database?
  • How difficult is it to get relational data to a service or object that needs XML data?
  • Can my database retrieve the XML data with lossless fidelity to the original XML data?
  • Will my database deliver acceptable performance and scalability for operations on XML data stored in the database?
  • Which database operations (queries, changes, complex joins) are most costly in terms of performance and required resources (CPUs, network, memory, storage)?

Your answers to these questions form the criteria by which it will make sense to use a relational database, or perhaps not. The alternatives to relational engines are native XML persistence engines such as eXist, MarkLogic, IBM DB2 V9, TigerLogic, and others.

InfoQ: What are the core ideas behind the PushToTest methodology, and what is its relation to SOA?

FC: It frequently surprises me how few enterprises, institutions, and organizations have a method to test services for scalability and performance. One Fortune 50 company asked a summer intern they wound up hiring to run a few performance tests, when he had time between other assignments, to find and identify scalability problems in their SOA application. That was their entire approach to scalability and performance testing.

The business value of running scalability and performance tests comes once a business formalizes a test method that includes the following:

  1. Choose the right set of test cases. For instance, the test of a multiple-interface and high volume service will be different than a service that handles periodic requests with huge message sizes. The test needs to be oriented to address the end-user goals in using the service and deliver actionable knowledge.
  2. Accurate test runs. Understanding the scalability and performance of a service requires dozens to hundreds of test case runs. Ad-hoc recording of test results is unsatisfactory. Test automation tools are plentiful and often free.
  3. Make the right conclusions when analyzing the results. Understanding the scalability and performance of a service requires understanding how the throughput measured as Transactions Per Second (TPS) at the service consumer changes with increased message size and complexity and increased concurrent requests.
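The throughput measure in step 3 is simple to compute but worth pinning down (a sketch; real test tools also report percentiles and error rates alongside the average):

```java
public class Throughput {
    // Transactions Per Second at the consumer: completed round trips
    // divided by elapsed wall-clock seconds for the test run.
    static double tps(int completedRequests, double elapsedSeconds) {
        return completedRequests / elapsedSeconds;
    }

    public static void main(String[] args) {
        // e.g. 1200 SOAP round trips completed during a 600-second run
        System.out.println(tps(1200, 600.0) + " requests/sec"); // 2.0 requests/sec
    }
}
```

Plotting this TPS value against message size and concurrent load, run after run, is what turns ad-hoc timings into the scalability curves the interview describes.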

All of this requires much more than an ad-hoc approach to reach useful and actionable knowledge. So I built and published the PushToTest SOA test methodology to help software architects, developers, and testers. The method is described on the Web site and I maintain an open-source test automation tool called PushToTest TestMaker to automate and operate SOA tests.

PushToTest provides Global Services to its customers to use our method and tools to deliver SOA scalability knowledge. Often we are successful convincing an enterprise or vendor that contracts with PushToTest for primary research to let us publish the research under an open source license. For example, the SOA Performance kit comes with the encoding style, XML parser, and use cases. The kit is available for free download, and older kits are available as well.

InfoQ: Thanks a lot for your time.

Frank Cohen is the leading authority for testing and optimizing software developed with Service Oriented Architecture (SOA) and Web Service designs. Frank is CEO and Founder of PushToTest and inventor of TestMaker, the open-source SOA test automation tool that helps software developers, QA technicians, and IT managers understand and optimize the scalability, performance, and reliability of their systems. Frank is the author of several books on optimizing information systems (Java Testing and Design from Prentice Hall in 2004 and FastSOA from Morgan Kaufmann in 2006). For the past 25 years he has led some of the software industry's most successful products, including Norton Utilities for the Macintosh, Stacker, and SoftWindows. He began by writing operating systems for microcomputers, helped establish video games as an industry, helped establish the Norton Utilities franchise, led Apple's efforts into middleware and Internet technologies, and was principal architect for the Sun Community Server. He also cofounded two companies, one publicly traded (OTC: IINC) and one now part of Symantec Web Services.

Critical Insight Launches Endpoint Detection and Response (EDR) Integrations

Customers can rest easier knowing a SOC is monitoring endpoint technologies along with the network and the cloud

Critical Insight, a Managed Detection and Response (MDR) service provider specializing in protecting the data, systems, and digital assets of organizations and critical infrastructure, is improving its monitoring and alerting capabilities with nine leading endpoint detection and response (EDR) product integrations.

Critical Insight's integrations allow its Security Operations Center to monitor for threats across the following endpoint detection and response products:

  • Carbon Black Defense
  • Carbon Black (VMware Carbon Black Cloud)
  • CrowdStrike Falcon
  • FireEye HX
  • Microsoft Defender for Endpoint
  • Palo Alto Cortex XDR
  • SentinelOne
  • Sophos
  • Symantec ATP

By combining network, cloud, and endpoint monitoring, Critical Insight can detect, validate, and provide thorough guidance on responding to active threats as well as preventing them in the future. Critical Insight has built a process to rapidly add integrations with other EDR products on request, so if your solution is not listed, it can be supported quickly.

"When we partner with a customer to detect and respond to attacks, we want to leverage their best-of-breed endpoint tools they already have and not require them to install a proprietary agent alongside their chosen EDR solution," said Critical Insight Chief Product Officer Fred Langston. He added, "By combining the telemetry and alerting from the endpoint with our best-of-breed network and cloud monitoring capabilities, Critical Insight can detect and alert on threats in virtually any environment, providing a comprehensive security monitoring solution with visibility into and across all your organization's networks, endpoints and cloud infrastructure."

"Our customers in local government, healthcare, and manufacturing have hybrid environments. They have employees working on-site and remotely; they have on-premises IoT, and an increasing cloud and SaaS presence," said Garrett Silver, CEO. "Our goal is to reduce their security risk. Our platform integrates across their existing technology, and our team extends their team."

Critical Insight offers more than SOC-as-a-Service. Our comprehensive security solutions combine technology and the power of human experts to provide value above and beyond what you can achieve with just a product or a singular security service. To learn more about all of the services we provide, visit our Web site.

About Critical Insight
Critical Insight delivers cybersecurity that's critical to your mission. We defend your organization with a personalized blend of MDR, managed, and professional services, to assess, test, and monitor 24x7. IT teams get their day jobs back with a full staff of expertise for less than the cost of one employee. We make cybersecurity a path to progress, from ensuring compliance to driving customer preference. We're committed to defending those who serve us all, so no organization must go without an effective cyber defense. Critical Insight. We Defend. You Thrive.

Find out more at
