Full list of C9510-052 real exam questions, updated today

Killexams.com offers a 100% free download of C9510-052 sample questions so you can try them before registering for the full copy. Check our C9510-052 exam simulator, which lets you experience the real C9510-052 test prep. Passing the actual C9510-052 exam will become much simpler for you. Killexams.com also provides three months of free updates to the C9510-052 Collaborative Lifecycle Management V4 exam questions.

Exam Code: C9510-052 Practice test 2022 by Killexams.com team
Collaborative Lifecycle Management V4
IBM Collaborative test plan
IBM Research Rolls Out A Comprehensive AI And ML Edge Research Strategy Anchored By Enterprise Partnerships And Use Cases

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI- and platform-based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
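A minimal sketch of that hub-and-spoke pattern is shown below, assuming hypothetical class, application, and location names rather than IBM's actual control-plane API; the point is simply that one declaration at the hub fans out to many spokes while the data stays local.

```python
from dataclasses import dataclass, field

@dataclass
class Spoke:
    """An edge location (e.g., a factory floor or retail branch) that runs apps locally."""
    name: str
    apps: dict = field(default_factory=dict)

    def deploy(self, app_name: str, version: str) -> None:
        # The spoke pulls the application and runs it next to its locally generated data.
        self.apps[app_name] = version

class Hub:
    """Central control plane: decides what runs where; raw data never flows through it."""
    def __init__(self) -> None:
        self.spokes: list[Spoke] = []

    def register(self, spoke: Spoke) -> None:
        self.spokes.append(spoke)

    def rollout(self, app_name: str, version: str) -> None:
        # A single declaration at the hub is propagated to every registered spoke.
        for spoke in self.spokes:
            spoke.deploy(app_name, version)

hub = Hub()
hub.register(Spoke("factory-floor-7"))
hub.register(Spoke("retail-branch-42"))
hub.rollout("visual-inspection-model", "v1.3")   # hypothetical application name
```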

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, with everything managed through a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor cost and the technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for about 60% of the world’s data today, vast amounts of new data are being created at the edge, including in industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It’s easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communication between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered on the quick service restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.
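As a toy illustration of the NLP step described above (turning a transcribed drive-thru utterance into a structured, digital order), here is a keyword-matching sketch; the menu, prices, and parsing logic are illustrative assumptions, not McDonald's or IBM's actual system, which relies on trained speech and language models.

```python
import re

# Hypothetical menu; a production system would use a trained NLU model, not keywords.
MENU = {"hamburger": 2.49, "cheeseburger": 2.99, "fries": 1.79, "cola": 1.29}
NUMBER_WORDS = {"a": 1, "an": 1, "one": 1, "two": 2, "three": 3}

def parse_order(transcript: str) -> dict:
    """Return {menu item: quantity} extracted from a spoken-order transcript."""
    tokens = re.findall(r"[a-z]+", transcript.lower())
    order: dict = {}
    for i, tok in enumerate(tokens):
        # Strip a plural "s" if the singular form is on the menu.
        item = tok[:-1] if tok.endswith("s") and tok[:-1] in MENU else tok
        if item in MENU:
            prev = tokens[i - 1] if i > 0 else ""
            qty = NUMBER_WORDS.get(prev, int(prev) if prev.isdigit() else 1)
            order[item] = order.get(item, 0) + qty
    return order

print(parse_order("I'd like two cheeseburgers and a cola please"))
# -> {'cheeseburger': 2, 'cola': 1}
```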

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot, the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s; the current, in-progress fourth revolution, Industry 4.0, promotes digital transformation.

Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using hundreds of AI/ML models. This client has already seen value in applying AI/ML models to manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production, the quicker the time-to-value and return on investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and the iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation otherwise means manually examining thousands of images, a time-consuming process that results in labeling of redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance (a minimal sketch of this idea follows this list).
  • Enable Day-2 AI operations to help with data lifecycle automation and governance and model creation, reduce production errors, and provide detection of out-of-distribution data to help determine whether a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
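Below is a minimal sketch of the data-summarization idea referenced in the inspection bullet above: cluster unlabeled feature embeddings and send only the samples nearest each cluster center out for human annotation. The embedding source, cluster count, and function names are assumptions for illustration; the article does not describe IBM's actual pipeline at this level of detail.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_for_labeling(embeddings: np.ndarray, budget: int) -> list[int]:
    """Pick `budget` representative sample indices instead of labeling everything.

    embeddings: (n_samples, n_features) array, e.g. image features from a
    pretrained backbone. Returns the indices of the samples closest to each
    cluster center -- a cheap form of data summarization for annotation.
    """
    kmeans = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(embeddings)
    distances = kmeans.transform(embeddings)              # distance to every center
    return [int(np.argmin(distances[:, c])) for c in range(budget)]

# Toy usage: 1,000 candidate "images" summarized down to 25 to annotate.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))
to_label = select_for_labeling(features, budget=25)
print(len(to_label), "samples sent for annotation")
```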

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
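As a hedged illustration of the drift monitoring described above, the sketch below compares a production feature distribution against the one seen at training time with a two-sample Kolmogorov-Smirnov test; the threshold, the single-feature view, and the simulated data are assumptions, not a prescription from IBM.

```python
import numpy as np
from scipy.stats import ks_2samp

def check_drift(train_feature: np.ndarray, live_feature: np.ndarray,
                alpha: float = 0.01) -> bool:
    """Return True if the live distribution appears to have drifted from training."""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha   # small p-value: the two distributions likely differ

rng = np.random.default_rng(42)
train = rng.normal(loc=0.0, scale=1.0, size=5000)   # distribution seen during training
live = rng.normal(loc=0.4, scale=1.0, size=5000)    # shifted distribution at the edge
if check_drift(train, live):
    print("Drift detected: flag the model for retraining / open a work order")
```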

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
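A minimal sketch of the federated idea Dr. Fuller describes follows: each spoke trains on its own private data, and only model weights, never raw data, travel to the hub, which averages them (FedAvg-style). The linear model, learning rate, and round counts are simplifying assumptions chosen so the example runs in a few seconds.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One spoke refines the global model on its private data (simple linear regression)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, spokes):
    """Hub step: size-weighted average of spoke updates; raw (X, y) never leaves a spoke."""
    updates, sizes = [], []
    for X, y in spokes:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
spokes = []
for _ in range(3):                              # three edge locations with private data
    X = rng.normal(size=(200, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    spokes.append((X, y))

w = np.zeros(2)
for _ in range(20):                             # twenty hub-spoke communication rounds
    w = federated_round(w, spokes)
print("learned weights:", np.round(w, 2))       # approaches [ 2. -1.]
```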

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

The graphic above shows the current status quo methods of performing Day-2 operations using centralized applications and a centralized data plane compared to the more efficient managed hub and spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and to create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations creating a need for data life cycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate - optimized to meet data lifecycle, regulatory & compliance as well as local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further Strengthen the model. Identification is also made for atypical data that is judged worthy of human attention.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can’t afford such extravagance. To reduce the edge compute footprint, model compression can reduce the number of parameters, for example from several hundred million to a few million (see the pruning sketch after this list).
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
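Here is the pruning sketch referenced in point 3: global magnitude pruning zeroes out the smallest weights so that a model's effective parameter count fits an edge footprint. The toy dense "model" and the 95% sparsity level are illustrative assumptions; real pipelines would combine pruning with quantization, distillation, and fine-tuning.

```python
import numpy as np

def magnitude_prune(weights: dict, sparsity: float = 0.9) -> dict:
    """Zero out the smallest-magnitude `sparsity` fraction of all parameters."""
    all_values = np.concatenate([w.ravel() for w in weights.values()])
    threshold = np.quantile(np.abs(all_values), sparsity)
    return {name: np.where(np.abs(w) >= threshold, w, 0.0)
            for name, w in weights.items()}

rng = np.random.default_rng(0)
model = {                                   # toy stand-in for a much larger network
    "layer1": rng.normal(size=(512, 512)),
    "layer2": rng.normal(size=(512, 10)),
}
small = magnitude_prune(model, sparsity=0.95)
kept = sum(int(np.count_nonzero(w)) for w in small.values())
total = sum(w.size for w in model.values())
print(f"kept {kept}/{total} parameters ({kept / total:.1%})")
```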

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suitable for locations that still run servers but call for a single-node, rather than clustered, deployment.

For smaller footprints such as industrial PCs or computer vision boards (for example, NVIDIA Jetson Xavier), Red Hat is working on a project that builds an even smaller version of OpenShift, called MicroShift, which provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge-device-type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.
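As a hedged sketch of what managing slice quality of service can look like in code, the example below tracks one slice's latency with an exponentially weighted moving average and flags SLA violations that would trigger automated re-orchestration; the slice name, SLA value, smoothing factor, and simulated measurements are all assumptions, not IBM Cloud Pak for Network Automation behavior.

```python
from dataclasses import dataclass

@dataclass
class SliceMonitor:
    """Tracks one 5G network slice against its latency SLA (illustrative only)."""
    name: str
    latency_sla_ms: float
    alpha: float = 0.2        # EWMA smoothing factor
    ewma_ms: float = 0.0

    def observe(self, latency_ms: float) -> bool:
        """Feed one latency sample; return True if the smoothed value breaches the SLA."""
        if self.ewma_ms == 0.0:
            self.ewma_ms = latency_ms
        else:
            self.ewma_ms = self.alpha * latency_ms + (1 - self.alpha) * self.ewma_ms
        return self.ewma_ms > self.latency_sla_ms

urllc = SliceMonitor(name="factory-low-latency-slice", latency_sla_ms=10.0)
for sample_ms in [4.0, 6.0, 9.0, 15.0, 22.0, 30.0]:      # simulated measurements
    if urllc.observe(sample_ms):
        print(f"{urllc.name}: EWMA {urllc.ewma_ms:.1f} ms breaches the SLA -> "
              "trigger automated re-orchestration")
```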

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data Analytics Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Unit) and CU (Centralized Unit) from the Baseband Unit used in 4G and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces that optimize the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunity of value-added functions for O-RAN

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes, which helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on “edge in” means it can provide infrastructure such as software-defined storage for a federated-namespace data lake that surrounds other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing happen close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That would have little value and would simply reproduce a hub-to-spoke model operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Hop aboard the wiz train; 7 facts about IIT Kanpur you probably didn't know!

Every year, lakhs of aspiring students battle it out to ace one of the fiercest exams in India, to get into the weaver-of-dreams institute, IIT. Apart from being a trusted brand, IITs offer a plethora of opportunities for gaining immense exposure and enhancing technical skills.

Closer home, IIT Kanpur comes in the creamy layer of all IIT institutes. With notable alumni like N.R. Narayana Murthy, founder of Infosys; Lalit Jalan, CEO of Reliance; Ashoke Sen, Padma Shri and Padma Bhushan awardee; and Muktesh Pant, CEO of KFC, among many, many others, IIT-K continues to carry the badge of one of the most premier institutes in the country.

Read on to know 7 facts about IIT-K that would free you of the age-old question 'Yeh IIT-JEE itna tough kyun hai yaaar?'

Started from a room in the HBT Institute in Kanpur, and now we're here

Established in 1959, the Indian Institute of Technology (IIT), Kanpur is one of the eminent technical institutes of India. It may come as a shock to many IIT aspirants that the esteemed institute commenced operations in a room in the canteen building of the Harcourt Butler Technological Institute at Agricultural Gardens in Kanpur. The institute moved to its present location in 1963.

Established under Kanpur Indo-American Programme (KIAP)

IIT Kanpur was established under the Kanpur Indo-American Programme (KIAP), a conglomerate of nine leading American universities: MIT, the University of California, Berkeley, the California Institute of Technology, Princeton University, Carnegie Institute of Technology, the University of Michigan, Ohio State University, Case Institute of Technology, and Purdue University.

First institute to offer computer science education

In today's age and times, colleges offering reputed computer science education programmes are mushrooming across India. One can't help but wonder: where did it all begin?

Back in 1963, under the leadership of economist John Kenneth Galbraith, IIT Kanpur was the first institute in India to offer Computer Science education. The very first computer courses started at IIT Kanpur in the month of August 1963 on an IBM 1620 system.

IIT- Kanpur's SIIC is a godsend for start-up entrepreneurs

In a bid to foster innovation, research, and entrepreneurial activity in tech-based areas, IIT Kanpur has set up the SIDBI Innovation and Incubation Centre (SIIC) in collaboration with the Small Industries Development Bank of India (SIDBI). The centre provides a platform for business newbies to develop their ideas into commercially viable products. Also, just FYI, do you happen to recognise the man in the picture? That's Narayana Murthy, Founder & Chairman of Infosys, casually chilling at his alma mater, IIT-K.

India's first nano satellite 'Jugnu' developed at IIT-K

The institute is pegged as the developer of India’s first nano satellite, 'Jugnu'. It was designed and built by a team of students working under the guidance of faculty members of the institute and scientists of the Indian Space Research Organisation (ISRO). Jugnu was successfully launched into orbit on 12 October 2011 by ISRO's PSLV-C18.

First academic institute in India to have it's very own helicopter ferry service

IIT-K is touted as the first academic institution in the country to have a helicopter ferry service. The service was started by IIT-K on 1 June 2013 and is run by Pawan Hans Helicopter Limited. In the initial phase, the ferry service only connects IIT Kanpur to the Lucknow airport, but plans to extend it to New Delhi are already in motion.

At present, there are two flights daily to and from Lucknow Airport, with a travel time of 25 minutes. The ferry service provides access to Lucknow Airport, which operates both international and domestic flights to all major cities and countries. IIT Kanpur is also said to have its own airstrip for aeronautical engineering students.

Few of the many firsts of IIT-Kanpur

Along with Jugnu, IIT-K boasts many firsts to its name. Some of these are: In 2021, IIT-K developed a portable soil testing device called 'Bhu Parikshak' that can detect soil health in just 90 seconds through an embedded mobile application. The device is set to assist farmers in obtaining soil health parameters along with recommended doses of fertilisers.

In July 2021, IIT Kanpur created the Swasa Oxyrise bottle. It's a portable device that can be carried anywhere to meet an emergency need for oxygen. The portable oxygen canister was created to address the shortage of oxygen during the pandemic. Reportedly, 10 litres of oxygen have been compressed into each 180-gram bottle.

How many of these did you know? Download the Knocksense app for more such interesting stories!

PhD in Computer Science

The doctor of philosophy in computer science program at Northwestern University primarily prepares students to become expert independent researchers. PhD students conduct original, transformational research on existing and emerging computer science topics. Students work alongside top researchers to advance core CS fields, from theory to AI to systems and networking. In addition, PhD students have the opportunity to collaborate with CS+X faculty who are jointly appointed between CS and disciplines including business, law, economics, journalism, and medicine.

Degrees

Doctor of Philosophy (PhD) in Computer Science

Doctor of Philosophy (PhD) in Computer Engineering joint with the Department of Electrical and Computer Engineering

Doctor of Philosophy (PhD) in Computer Science and Learning Sciences joint with the School of Education and Social Policy (SESP)

Doctor of Philosophy (PhD) in Technology and Social Behavior joint with Computer Science and Communication

Joining a Track

Doctor of philosophy in computer science students follow the course requirements, qualifying exam structure, and thesis process specific to one of five tracks:

Within each track, students explore many areas of interest, including programming languages, security and privacy and human-computer interaction.

Learn more about computer science research areas

Curriculum and Requirements

The focus of the CS PhD program is learning how to do research by doing research, and students are expected to spend at least 50% of their time on research. Students complete ten graduate curriculum requirements (including COMP_SCI 496: Introduction to Graduate Studies in Computer Science), and additional course selection is tailored based on individual experience, research track, and interests. Students must also successfully complete a qualifying exam to be admitted to candidacy.

CS PhD Manual


Opportunities for PhD Students

Cognitive Science Certificate

Computer science PhD students may earn a specialization in cognitive science by taking six cognitive science courses. In addition to broadening a student’s area of study and improving their resume, students attend cognitive science events and lectures, they can receive conference travel support, and they are exposed to cross-disciplinary exchanges.

The Crown Family Graduate Internship Program

PhD candidates may elect to participate in the Crown Family Graduate Internship Program. This opportunity allows the doctoral candidate to gain practical experience in industry or in national research laboratories in areas closely related to their research.

Management for Scientists and Engineers Certificate Program

The certificate program — jointly offered by The Graduate School and Kellogg School of Management — provides post-candidacy doctoral students with a basic understanding of strategy, finance, risk and uncertainty, marketing, accounting and leadership. Students are introduced to business concepts and specific frameworks for effective management relevant to both for-profit and nonprofit sectors.

Career Paths

Recent graduates of the computer science PhD program are pursuing careers in industry & research labs, academia, and startups.

Academia

  • Georgia Institute of Technology
  • Illinois Institute of Technology
  • MIT
  • Northeastern
  • University of Pittsburgh
  • University of Rochester
  • University of Washington
  • Naval Research Laboratory
  • Northwestern University

Industry & Research Labs

  • Adobe Research
  • Apple
  • Facebook
  • Google
  • Intel
  • Microsoft
  • Narrative Science
  • Nokia
  • Oak Ridge National Laboratory
  • VMWare
  • Yahoo
Did the Universe Just Happen?



The Atlantic Monthly | April 1988
 

I. Flying Solo


Ed Fredkin is scanning the visual field systematically. He checks the instrument panel regularly. He is cool, collected, in control. He is the optimally efficient pilot.

The plane is a Cessna Stationair Six—a six-passenger single-engine amphibious plane, the kind with the wheels recessed in pontoons. Fredkin bought it not long ago and is still working out a few kinks; right now he is taking it for a spin above the British Virgin Islands after some minor mechanical work.

He points down at several brown-green masses of land, embedded in a turquoise sea so clear that the shadows of yachts are distinctly visible on its sandy bottom. He singles out a small island with a good-sized villa and a swimming pool, and explains that the compound, and the island as well, belong to "the guy that owns Boy George"—the rock star's agent, or manager, or something.

I remark, loudly enough to overcome the engine noise, "It's nice."

Yes, Fredkin says, it's nice. He adds, "It's not as nice as my island."

He's joking, I guess, but he's right. Ed Fredkin's island, which soon comes into view, is bigger and prettier. It is about 125 acres, and the hill that constitutes its bulk is a deep green—a mixture of reeds and cacti, sea grape and turpentine trees, machineel and frangipani. Its beaches range from prosaic to sublime, and the coral in the waters just offshore attracts little and big fish whose colors look as if they were coordinated by Alexander Julian. On the island's west side are immense rocks, suitable for careful climbing, and on the east side are a bar and restaurant and a modest hotel, which consists of three clapboard buildings, each with a few rooms. Between east and west is Fredkin's secluded island villa. All told, Moskito Island—or Drake's Anchorage, as the brochures call it—is a nice place for Fredkin to spend the few weeks of each year when he is not up in the Boston area tending his various other businesses.

In addition to being a self-made millionaire, Fredkin is a self-made intellectual. Twenty years ago, at the age of thirty-four, without so much as a bachelor's degree to his name, he became a full professor at the Massachusetts Institute of Technology. Though hired to teach computer science, and then selected to guide MIT's now eminent computer-science laboratory through some of its formative years, he soon branched out into more-offbeat things. Perhaps the most idiosyncratic of the courses he has taught is one on "digital physics," in which he propounded the most idiosyncratic of his several idiosyncratic theories. This theory is the reason I've come to Fredkin's island. It is one of those things that a person has to be prepared for. The preparer has to say, "Now, this is going to sound pretty weird, and in a way it is, but in a way it's not as weird as it sounds, and you'll see this once you understand it, but that may take a while, so in the meantime don't prejudge it, and don't casually dismiss it." Ed Fredkin thinks that the universe is a computer.

Fredkin works in a twilight zone of modern science—the interface of computer science and physics. Here two concepts that traditionally have ranked among science's most fundamental—matter and energy—keep bumping into a third: information. The exact relationship among the three is a question without a clear answer, a question vague enough, and basic enough, to have inspired a wide variety of opinions. Some scientists have settled for modest and sober answers. Information, they will tell you, is just one of many forms of matter and energy; it is embodied in things like a computer's electrons and a brain's neural firings, things like newsprint and radio waves, and that is that. Others talk in grander terms, suggesting that information deserves full equality with matter and energy, that it should join them in some sort of scientific trinity, that these three things are the main ingredients of reality.

Fredkin goes further still. According to his theory of digital physics, information is more fundamental than matter and energy. He believes that atoms, electrons, and quarks consist ultimately of bits—binary units of information, like those that are the currency of computation in a personal computer or a pocket calculator. And he believes that the behavior of those bits, and thus of the entire universe, is governed by a single programming rule. This rule, Fredkin says, is something fairly simple, something vastly less arcane than the mathematical constructs that conventional physicists use to explain the dynamics of physical reality. Yet through ceaseless repetition—by tirelessly taking information it has just transformed and transforming it further—it has generated pervasive complexity. Fredkin calls this rule, with discernible reverence, "the cause and prime mover of everything."

T THE RESTAURANT ON FREDKIN'S ISLAND THE FOOD is prepared by a large man named Brutus and is humbly submitted to diners by men and women native to nearby islands. The restaurant is open-air, ventilated by a sea breeze that is warm during the day, cool at night, and almost always moist. Between the diners and the ocean is a knee-high stone wall, against which waves lap rhythmically. Beyond are other islands and a horizon typically blanketed by cottony clouds. Above is a thatched ceiling, concealing, if the truth be told, a sheet of corrugated steel. It is lunchtime now, and Fredkin is sitting in a cane-and-wicker chair across the table from me, wearing a light cotton sport shirt and gray swimming trunks. He was out trying to windsurf this morning, and he enjoyed only the marginal success that one would predict on the basis of his appearance. He is fairly tall and very thin, and has a softness about him—not effeminacy, but a gentleness of expression and manner—and the complexion of a scholar; even after a week on the island, his face doesn't vary much from white, except for his nose, which is red. The plastic frames of his glasses, in a modified aviator configuration, surround narrow eyes; there are times—early in the morning or right after a nap—when his eyes barely qualify as slits. His hair, perennially semi-combed, is black with a little gray.

Fredkin is a pleasant mealtime companion. He has much to say that is interesting, which is fortunate because generally he does most of the talking. He has little curiosity about other people's minds, unless their interests happen to coincide with his, which few people's do. "He's right above us," his wife, Joyce, once explained to me, holding her left hand just above her head, parallel to the ground. "Right here looking down. He's not looking down saying, 'I know more than you.' He's just going along his own way."

The food has not yet arrived, and Fredkin is passing the time by describing the world view into which his theory of digital physics fits. "There are three great philosophical questions," he begins. "What is life? What is consciousness and thinking and memory and all that? And how does the universe work?" He says that his "informational viewpoint" encompasses all three. Take life, for example. Deoxyribonucleic acid, the material of heredity, is "a good example of digitally encoded information," he says. "The information that implies what a creature or a plant is going to be is encoded; it has its representation in the DNA, right? Okay, now, there is a process that takes that information and transforms it into the creature, okay?" His point is that a mouse, for example, is "a big, complicated informational process."

Fredkin exudes rationality. His voice isn't quite as even and precise as Mr. Spock's, but it's close, and the parallels don't end there. He rarely displays emotion—except, perhaps, the slightest sign of irritation under the most trying circumstances. He has never seen a problem that didn't have a perfectly logical solution, and he believes strongly that intelligence can be mechanized without limit. More than ten years ago he founded the Fredkin Prize, a $100,000 award to be given to the creator of the first computer program that can beat a world chess champion. No one has won it yet, and Fredkin hopes to have the award raised to $1 million.

Fredkin is hardly alone in considering DNA a form of information, but this observation was less common back when he first made it. So too with many of his ideas. When his world view crystallized, a quarter of a century ago, he immediately saw dozens of large-scale implications, in fields ranging from physics to biology to psychology. A number of these have gained currency since then, and he considers this trend an ongoing substantiation of his entire outlook.

Fredkin talks some more and then recaps. "What I'm saying is that at the most basic level of complexity an information process runs what we think of as physics. At the much higher level of complexity life, DNA—you know, the biochemical functions—are controlled by a digital information process. Then, at another level, our thought processes are basically information processing." That is not to say, he stresses, that everything is best viewed as information. "It's just like there's mathematics and all these other things, but not everything is best viewed from a mathematical viewpoint. So what's being said is not that this comes along and replaces everything. It's one more avenue of modeling reality, and it happens to cover the sort of three biggest philosophical mysteries. So it sort of completes the picture."

Among the scientists who don't dismiss Fredkin's theory of digital physics out of hand is Marvin Minsky, a computer scientist and polymath at MIT, whose renown approaches cultic proportions in some circles. Minsky calls Fredkin "Einstein-like" in his ability to find deep principles through simple intellectual excursions. If it is true that most physicists think Fredkin is off the wall, Minsky told me, it is also true that "most physicists are the ones who don't invent new theories"; they go about their work with tunnel vision, never questioning the dogma of the day. When it comes to the kind of basic reformulation of thought proposed by Fredkin, "there's no point in talking to anyone but a Feynman or an Einstein or a Pauli," Minsky says. "The rest are just Republicans and Democrats." I talked with Richard Feynman, a Nobel laureate at the California Institute of Technology, before his death, in February. Feynman considered Fredkin a brilliant and consistently original, though sometimes incautious, thinker. If anyone is going to come up with a new and fruitful way of looking at physics, Feynman said, Fredkin will.

Notwithstanding their moral support, though, neither Feynman nor Minsky was ever convinced that the universe is a computer. They were endorsing Fredkin's mind, not this particular manifestation of it. When it comes to digital physics, Ed Fredkin is flying solo.

He knows that, and he regrets that his ideas continue to lack the support of his colleagues. But his self-confidence is unshaken. You see, Fredkin has had an odd childhood, and an odd education, and an odd career, all of which, he explains, have endowed him with an odd perspective, from which the essential nature of the universe happens to be clearly visible. "I feel like I'm the only person with eyes in a world where everyone's blind," he says.

II. A Finely Mottled Universe


THE PRIME MOVER OF EVERYTHING, THE SINGLE principle that governs the universe, lies somewhere within a class of computer programs known as cellular automata, according to Fredkin.

The cellular automaton was invented in the early 1950s by John von Neumann, one of the architects of computer science and a seminal thinker in several other fields. Von Neumann (who was stimulated in this and other inquiries by the ideas of the mathematician Stanislaw Ulam) saw cellular automata as a way to study reproduction abstractly, but the word cellular is not meant biologically when used in this context. It refers, rather, to adjacent spaces—cells—that together form a pattern. These days the cells typically appear on a computer screen, though von Neumann, lacking this convenience, rendered them on paper.

In some respects cellular automata resemble those splendid graphic displays produced by patriotic masses in authoritarian societies and by avid football fans at American universities. Holding up large colored cards on cue, they can collectively generate a portrait of, say, Lenin, Mao Zedong, or a University of Southern California Trojan. More impressive still, one portrait can fade out and another crystallize in no time at all. Again and again one frozen frame melts into another. It is a spectacular feat of precision and planning.

But suppose there were no planning. Suppose that instead of arranging a succession of cards to display, everyone learned a single rule for repeatedly determining which card was called for next. This rule might assume any of a number of forms. For example, in a crowd where all cards were either blue or white, each card holder could be instructed to look at his own card and the cards of his four nearest neighbors—to his front, back, left, and right—and do what the majority did during the last frame. (This five-cell group is known as the von Neumann neighborhood.) Alternatively, each card holder could be instructed to do the opposite of what the majority did. In either event the result would be a series not of predetermined portraits but of more abstract, unpredicted patterns. If, by prior agreement, we began with a USC Trojan, its white face might dissolve into a sea of blue, as whitecaps drifted aimlessly across the stadium. Conversely, an ocean of randomness could yield islands of structure—not a Trojan, perhaps, but at least something that didn't look entirely accidental. It all depends on the original pattern of cells and the rule used to transform it incrementally.
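The card stunt without planning is literally a two-state cellular automaton, and the majority rule just described is easy to write down. Here is a minimal sketch in Python (the wrap-around edges and grid size are assumptions for convenience):

```python
import numpy as np

def majority_step(grid: np.ndarray) -> np.ndarray:
    """One update of the rule described above: each cell does what the majority of
    its von Neumann neighborhood (itself plus the four adjacent cells) did last frame."""
    neighbors_on = (
        grid
        + np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0)   # front, back
        + np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1)   # left, right
    )
    return (neighbors_on >= 3).astype(int)   # 3 of 5 cells is a majority

rng = np.random.default_rng(0)
grid = (rng.random((32, 32)) < 0.5).astype(int)   # a random sea of blue/white "cards"
for _ in range(10):
    grid = majority_step(grid)
print("cells still on after 10 frames:", int(grid.sum()))
```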

This leaves room for abundant variety. There are many ways to define a neighborhood, and for any given neighborhood there are many possible rules, most of them more complicated than blind conformity or implacable nonconformity. Each cell may, for instance, not only count cells in the vicinity but also pay attention to which particular cells are doing what. All told, the number of possible rules is an exponential function of the number of cells in the neighborhood; the von Neumann neighborhood alone has 2^32, or around 4 billion, possible rules, and the nine-cell neighborhood that results from adding corner cells offers 2^512, or roughly 1 with 154 zeros after it, possibilities. But whatever neighborhoods, and whatever rules, are programmed into a computer, two things are always true of cellular automata: all cells use the same rule to determine future behavior by reference to the past behavior of neighbors, and all cells obey the rule simultaneously, time after time.
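Those counts follow from a short calculation: a binary neighborhood of n cells can be in 2^n configurations, and a rule assigns an output bit to each configuration, so

```latex
\text{number of rules} \;=\; 2^{\,2^{n}}, \qquad
2^{2^{5}} = 2^{32} \approx 4.3 \times 10^{9}, \qquad
2^{2^{9}} = 2^{512} \approx 1.3 \times 10^{154}.
```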

In the late 1950s, shortly after becoming acquainted with cellular automata, Fredkin began playing around with rules, selecting the powerful and interesting and discarding the weak and bland. He found, for example, that any rule requiring all four of a cell's immediate neighbors to be lit up in order for the cell itself to be lit up at the next moment would not provide sustained entertainment; a single "off" cell would proliferate until darkness covered the computer screen. But equally simple rules could create great complexity. The first such rule discovered by Fredkin dictated that a cell be on if an odd number of cells in its von Neumann neighborhood had been on, and off otherwise. After "seeding" a good, powerful rule with an irregular landscape of off and on cells, Fredkin could watch rich patterns bloom, some freezing upon maturity, some eventually dissipating, others locking into a cycle of growth and decay. A colleague, after watching one of Fredkin's rules in action, suggested that he sell the program to a designer of Persian rugs.
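Fredkin's first rule, as described above, is a parity rule: a cell is on at the next step exactly when an odd number of cells in its five-cell von Neumann neighborhood were on. A minimal sketch follows (the seed pattern and wrap-around edges are arbitrary choices):

```python
import numpy as np

def parity_step(grid: np.ndarray) -> np.ndarray:
    """Fredkin's rule: a cell is on next step iff an odd number of cells in its
    von Neumann neighborhood (itself plus its four nearest neighbors) are on now."""
    total = (
        grid
        + np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0)
        + np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1)
    )
    return total % 2

# Seed with a small irregular patch and watch rich, self-replicating patterns bloom.
grid = np.zeros((64, 64), dtype=int)
grid[30:33, 30:34] = [[1, 0, 1, 1],
                      [1, 1, 0, 1],
                      [0, 1, 1, 0]]
for _ in range(16):
    grid = parity_step(grid)
print("cells on after 16 steps:", int(grid.sum()))
```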

Today new cellular-automaton rules are formulated and tested by the "information-mechanics group" founded by Fredkin at MIT's computer-science laboratory. The core of the group is an international duo of physicists, Tommaso Toffoli, of Italy, and Norman Margolus, of Canada. They differ in the degree to which they take Fredkin's theory of physics seriously, but both agree with him that there is value in exploring the relationship between computation and physics, and they have spent much time using cellular automata to simulate physical processes. In the basement of the computer-science laboratory is the CAM—the cellular automaton machine, designed by Toffoli and Margolus partly for that purpose. Its screen has 65,536 cells, each of which can assume any of four colors and can change color sixty times a second.

The CAM is an engrossing, potentially mesmerizing machine. Its four colors—the three primaries and black—intermix rapidly and intricately enough to form subtly shifting hues of almost any gradation; pretty waves of deep blue or red ebb and flow with fine fluidity and sometimes with rhythm, playing on the edge between chaos and order.

Guided by the right rule, the CAM can do a respectable imitation of pond water rippling outward circularly in deference to a descending pebble, or of bubbles forming at the bottom of a pot of boiling water, or of a snowflake blossoming from a seed of ice: step by step, a single "ice crystal" in the center of the screen unfolds into a full-fledged flake, a six-edged sheet of ice riddled symmetrically with dark pockets of mist. (It is easy to see how a cellular automaton can capture the principles thought to govern the growth of a snowflake: regions of vapor that find themselves in the vicinity of a budding snowflake freeze—unless so nearly enveloped by ice crystals that they cannot discharge enough heat to freeze.)
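(A toy version of that solidification idea. It is only a sketch: a square grid stands in for the hexagonal lattice a real snowflake demonstration would use, so the pattern grows with four-fold rather than six-fold symmetry, and the rule here, freeze when exactly one neighbor is already ice, is a simplification of the heat-discharge story.)

```python
import numpy as np

def snowflake_step(frozen):
    """Toy solidification rule: a vapor cell freezes iff exactly one of its
    four orthogonal neighbors is already frozen -- close enough to the flake
    to freeze, but not so enveloped that (in the physical story) it could not
    shed its heat.  Cells are 1 for ice, 0 for vapor."""
    neighbors = (np.roll(frozen, -1, 0) + np.roll(frozen, 1, 0)
                 + np.roll(frozen, -1, 1) + np.roll(frozen, 1, 1))
    return ((frozen == 1) | (neighbors == 1)).astype(np.uint8)

frozen = np.zeros((129, 129), dtype=np.uint8)
frozen[64, 64] = 1                  # the seed crystal in the center
for _ in range(40):
    frozen = snowflake_step(frozen)
```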

These exercises are fun to watch, and they give one a sense of the cellular automaton's power, but Fredkin is not particularly interested in them. After all, a snowflake is not, at the visible level, literally a cellular automaton; an ice crystal is not a single, indivisible bit of information, like the cell that portrays it. Fredkin believes that automata will more faithfully mirror reality as they are applied to its more fundamental levels and the rules needed to model the motion of molecules, atoms, electrons, and quarks are uncovered. And he believes that at the most fundamental level (whatever that turns out to be) the automaton will describe the physical world with perfect precision, because at that level the universe is a cellular automaton, in three dimensions—a crystalline lattice of interacting logic units, each one "deciding" zillions of times per second whether it will be off or on at the next point in time. The information thus produced, Fredkin says, is the fabric of reality, the stuff of which matter and energy are made. An electron, in Fredkin's universe, is nothing more than a pattern of information, and an orbiting electron is nothing more than that pattern moving. Indeed, even this motion is in some sense illusory: the bits of information that constitute the pattern never move, any more than football fans would change places to slide a USC Trojan four seats to the left. Each bit stays put and confines its activity to blinking on and off. "You see, I don't believe that there are objects like electrons and photons, and things which are themselves and nothing else," Fredkin says. "What I believe is that there's an information process, and the bits, when they're in certain configurations, behave like the thing we call the electron, or the hydrogen atom, or whatever."

THE READER MAY NOW HAVE A NUMBER OF questions that unless satisfactorily answered will lead to something approaching contempt for Fredkin's thinking. One such question concerns the way cellular automata chop space and time into little bits. Most conventional theories of physics reflect the intuition that reality is continuous—that one "point" in time is no such thing but, rather, flows seamlessly into the next, and that space, similarly, doesn't come in little chunks but is perfectly smooth. Fredkin's theory implies that both space and time have a graininess to them, and that the grains cannot be chopped up into smaller grains; that people and dogs and trees and oceans, at rock bottom, are more like mosaics than like paintings; and that time's essence is better captured by a digital watch than by a grandfather clock.

The obvious question is, Why do space and time seem continuous if they are not? The obvious answer is, The cubes of space and points of time are very, very small: time seems continuous in just the way that movies seem to move when in fact they are frames, and the illusion of spatial continuity is akin to the emergence of smooth shades from the finely mottled texture of a newspaper photograph.

The obvious answer, Fredkin says, is not the whole answer; the illusion of continuity is yet more deeply ingrained in our situation. Even if the ticks on the universal clock were, in some absolute sense, very slow, time would still seem continuous to us, since our perception, itself proceeding in the same ticks, would be no more finely grained than the processes being perceived. So too with spatial perception: Can eyes composed of the smallest units in existence perceive those units? Could any informational process sense its ultimate constituents? The point is that the basic units of time and space in Fredkin's reality don't just happen to be imperceptibly small. As long as the creatures doing the perceiving are in that reality, the units have to be imperceptibly small.

Though some may find this discreteness hard to comprehend, Fredkin finds a grainy reality more sensible than a smooth one. If reality is truly continuous, as most physicists now believe it is, then there must be quantities that cannot be expressed with a finite number of digits; the number representing the strength of an electromagnetic field, for example, could begin 5.23429847 and go on forever without falling into a pattern of repetition. That seems strange to Fredkin: wouldn't you eventually get to a point, around the hundredth, or thousandth, or millionth decimal place, where you had hit the strength of the field right on the nose? Indeed, wouldn't you expect that every physical quantity has an exactness about it? Well, you might and might not. But Fredkin does expect exactness, and in his universe he gets it.

Fredkin has an interesting way of expressing his insistence that all physical quantities be "rational." (A rational number is a number that can be expressed as a fraction—as a ratio of one integer to another. Expressed as a decimal, a rational number will either end, as 5/2 does in the form of 2.5, or repeat itself endlessly, as 1/7 does in the form of 0.142857142857142 . . .) He says he finds it hard to believe that a finite volume of space could contain an infinite amount of information. It is almost as if he viewed each parcel of space as having the digits describing it actually crammed into it. This seems an odd perspective, one that confuses the thing itself with the information it represents. But such an inversion between the realm of things and the realm of representation is common among those who work at the interface of computer science and physics. Contemplating the essence of information seems to affect the way you think.

The prospect of a discrete reality, however alien to the average person, is easier to fathom than the problem of the infinite regress, which is also raised by Fredkin's theory. The problem begins with the fact that information typically has a physical basis. Writing consists of ink; speech is composed of sound waves; even the computer's ephemeral bits and bytes are grounded in configurations of electrons. If the electrons are in turn made of information, then what is the information made of?

Asking questions like this ten or twelve times is not a good way to earn Fredkin's respect. A look of exasperation passes fleetingly over his face. "What I've tried to explain is that—and I hate to do this, because physicists are always doing this in an obnoxious way—is that the question implies you're missing a very important concept." He gives it one more try, two more tries, three, and eventually some of the fog between me and his view of the universe disappears. I begin to understand that this is a theory not just of physics but of metaphysics. When you disentangle these theories—compare the physics with other theories of physics, and the metaphysics with other ideas about metaphysics—both sound less far-fetched than when jumbled together as one. And, as a bonus, Fredkin's metaphysics leads to a kind of high-tech theology—to speculation about supreme beings and the purpose of life.

III. The Perfect Thing


EDWARD FREDKIN WAS BORN IN 1934, THE LAST OF three children in a previously prosperous family. His father, Manuel, had come to Southern California from Russia shortly after the Revolution and founded a chain of radio stores that did not survive the Great Depression. The family learned economy, and Fredkin has not forgotten it. He can reach into his pocket, pull out a tissue that should have been retired weeks ago, and, with cleaning solution, make an entire airplane windshield clear. He can take even a well-written computer program, sift through it for superfluous instructions, and edit it accordingly, reducing both its size and its running time.

Manuel was by all accounts a competitive man, and he focused his competitive energies on the two boys: Edward and his older brother, Norman. Manuel routinely challenged Ed's mastery of fact, inciting sustained arguments over, say, the distance between the moon and the earth. Norman's theory is that his father, though bright, was intellectually insecure; he seemed somehow threatened by the knowledge the boys brought home from school. Manuel's mistrust of books, experts, and all other sources of received wisdom was absorbed by Ed.

So was his competitiveness. Fredkin always considered himself the smartest kid in his class. He used to place bets with other students on test scores. This habit did not endear him to his peers, and he seems in general to have lacked the prerequisites of popularity. His sense of humor was unusual. His interests were not widely shared. His physique was not a force to be reckoned with. He recalls, "When I was young—you know, sixth, seventh grade—two kids would be choosing sides for a game of something. It could be touch football. They'd choose everybody but me, and then there'd be a fight as to whether one side would have to take me. One side would say, 'We have eight and you have seven,' and they'd say, 'That's okay.' They'd be willing to play with seven." Though exhaustive in documenting his social alienation, Fredkin concedes that he was not the only unpopular student in school. "There was a socially active subgroup, probably not a majority, maybe forty percent, who were very socially active. They went out on dates. They went to parties. They did this and they did that. The others were left out. And I was in this big left-out group. But I was in the pole position. I was really left out."

Of the hours Fredkin spent alone, a good many were devoted to courting disaster in the name of science. By wiring together scores of large, 45-volt batteries, he collected enough electricity to conjure up vivid, erratic arcs. By scraping the heads off matches and buying sulfur, saltpeter, and charcoal, he acquired a good working knowledge of pyrotechnics. He built small, minimally destructive but visually impressive bombs, and fashioned rockets out of cardboard tubing and aluminum foil. But more than bombs and rockets, it was mechanisms that captured Fredkin's attention. From an early age he was viscerally attracted to Big Ben alarm clocks, which he methodically took apart and put back together. He also picked up his father's facility with radios and household appliances. But whereas Manuel seemed to fix things without understanding the underlying science, his son was curious about first principles.

So while other kids were playing baseball or chasing girls, Ed Fredkin was taking things apart and putting them back together. Children were aloof, even cruel, but a broken clock always responded gratefully to a healing hand. "I always got along well with machines," he remembers.

After graduation from high school, in 1952, Fredkin headed for the California Institute of Technology with hopes of finding a more appreciative social environment. But students at Caltech turned out to bear a disturbing resemblance to people he had observed elsewhere. "They were smart like me," he recalls, "but they had the full spectrum and distribution of social development." Once again Fredkin found his weekends unencumbered by parties. And once again he didn't spend his free time studying. Indeed, one of the few lessons he learned is that college is different from high school: in college if you don't study, you flunk out. This he did a few months into his sophomore year. Then, following in his brother's footsteps, he joined the Air Force and learned to fly fighter planes.

IT WAS THE AIR FORCE THAT FINALLY BROUGHT Fredkin face to face with a computer. He was working for the Air Proving Ground Command, whose function was to ensure that everything from combat boots to bombers was of top quality, when the unit was given the job of testing a computerized air-defense system known as SAGE (for "semi-automatic ground environment"). To test SAGE the Air Force needed men who knew something about computers, and so in 1956 a group from the Air Proving Ground Command, including Fredkin, was sent to MIT's Lincoln Laboratory and enrolled in computer-science courses. "Everything made instant sense to me," Fredkin remembers. "I just soaked it up like a sponge."

SAGE, when ready for testing, turned out to be even more complex than anticipated—too complex to be tested by anyone but genuine experts—and the job had to be contracted out. This development, combined with bureaucratic disorder, meant that Fredkin was now a man without a function, a sort of visiting scholar at Lincoln Laboratory. "For a period of time, probably over a year, no one ever came to tell me to do anything. Well, meanwhile, down the hall they installed the latest, most modern computer in the world—IBM's biggest, most powerful computer. So I just went down and started to program it." The computer was an XD-1. It was slower and less capacious than an Apple Macintosh and was roughly the size of a large house.

When Fredkin talks about his year alone with this dinosaur, you half expect to hear violins start playing in the background. "My whole way of life was just waiting for the computer to come along," he says. "The computer was in essence just the perfect thing." It was in some respects preferable to every other conglomeration of matter he had encountered—more sophisticated and flexible than other inorganic machines, and more logical than organic ones. "See, when I write a program, if I write it correctly, it will work. If I'm dealing with a person, and I tell him something, and I tell him correctly, it may or may not work."

The XD-1, in short, was an intelligence with which Fredkin could empathize. It was the ultimate embodiment of mechanical predictability, the refuge to which as a child he had retreated from the incomprehensibly hostile world of humanity. If the universe is indeed a computer, then it could be a friendly place after all.

During the several years after his arrival at Lincoln Lab, as Fredkin was joining the first generation of hackers, he was also immersing himself in physics—finally learning, through self-instruction, the lessons he had missed by dropping out of Caltech. It is this two-track education, Fredkin says, that led him to the theory of digital physics. For a time "there was no one in the world with the same interest in physics who had the intimate experience with computers that I did. I honestly think that there was a period of many years when I was in a unique position."

The uniqueness lay not only in the fusion of physics and computer science but also in the peculiar composition of Fredkin's physics curriculum. Many physicists acquire as children the sort of kinship with mechanism that he still feels, but in most cases it is later diluted by formal education; quantum mechanics, the prevailing paradigm in contemporary physics, seems to imply that at its core, reality has truly random elements and is thus inherently unpredictable. But Fredkin escaped the usual indoctrination. To this day he maintains, as did Albert Einstein, that the common interpretation of quantum mechanics is mistaken—that any seeming indeterminacy in the subatomic world reflects only our ignorance of the determining principles, not their absence. This is a critical belief, for if he is wrong and the universe is not ultimately deterministic, then it cannot be governed by a process as exacting as computation.

After leaving the Air Force, Fredkin went to work for Bolt Beranek and Newman, a consulting firm in the Boston area, now known for its work in artificial intelligence and computer networking. His supervisor at BBN, J. C. R. Licklider, says of his first encounter with Fredkin, "It was obvious to me he was very unusual and probably a genius, and the more I came to know him, the more I came to think that that was not too elevated a description." Fredkin "worked almost continuously," Licklider recalls. "It was hard to get him to go to sleep sometimes." A pattern emerged. Licklider would give Fredkin a problem to work on—say, figuring out how to get a computer to search a text in its memory for an only partially specified sequence of letters. Fredkin would retreat to his office and return twenty or thirty hours later with the solution—or, rather, a solution; he often came back with the answer to a question different from the one that Licklider had asked. Fredkin's focus was intense but undisciplined, and it tended to stray from a problem as soon as he was confident that he understood the solution in principle.

This intellectual wanderlust is one of Fredkin's most enduring and exasperating traits. Just about everyone who knows him has a way of describing it: "He doesn't really work. He sort of fiddles." "Very often he has these great ideas and then does not have the discipline to cultivate the idea." "There is a gap between the quality of the original ideas and what follows. There's an imbalance there." Fredkin is aware of his reputation. In self-parody he once brought a cartoon to a friend's attention: A beaver and another forest animal are contemplating an immense man-made dam. The beaver is saying something like, "No, I didn't actually build it. But it's based on an idea of mine."

Among the ideas that congealed in Fredkin's mind during his stay at BBN is the one that gave him his current reputation as (depending on whom you talk to) a thinker of great depth and rare insight, a source of interesting but reckless speculation, or a crackpot.

IV. Tick by Tick, Dot by Dot


THE IDEA THAT THE UNIVERSE IS A COMPUTER WAS inspired partly by the idea of the universal computer. Universal computer, a term that can accurately be applied to everything from an IBM PC to a Cray supercomputer, has a technical, rigorous definition, but here its upshot will do: a universal computer can simulate any process that can be precisely described and perform any calculation that is performable.

This broad power is ultimately grounded in something very simple: the algorithm. An algorithm is a fixed procedure for converting input into output, for taking one body of information and turning it into another. For example, a computer program that takes any number it is given, squares it, and subtracts three is an algorithm. This isn't a very powerful algorithm; by taking a 3 and turning it into a 6, it hasn't created much new information. But algorithms become more powerful with recursion. A recursive algorithm is an algorithm whose output is fed back into it as input. Thus the algorithm that turned 3 into 6, if operating recursively, would continue, turning 6 into 33, then 33 into 1,086, then 1,086 into 1,179,393, and so on.
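(The toy algorithm and its recursive use, spelled out as a few lines of code; the function name is mine.)

```python
def square_minus_three(x):
    """The toy algorithm from the text: square the input, subtract three."""
    return x * x - 3

x = 3
for _ in range(4):
    x = square_minus_three(x)   # 3 -> 6 -> 33 -> 1086 -> 1179393
print(x)
```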

The power of recursive algorithms is especially apparent in the simulation of physical processes. While Fredkin was at BBN, he would use the company's Digital Equipment Corporation PDP-1 computer to simulate, say, two particles, one that was positively charged and one that was negatively charged, orbiting each other in accordance with the laws of electromagnetism. It was a pretty sight: two phosphor dots dancing, each etching a green trail that faded into yellow and then into darkness. But for Fredkin the attraction lay less in this elegant image than in its underlying logic. The program he had written took the particles' velocities and positions at one point in time, computed those variables for the next point in time, and then fed the new variables back into the algorithm to get newer variables—and so on and so on, thousands of times a second. The several steps in this algorithm, Fredkin recalls, were "very simple and very beautiful." It was in these orbiting phosphor dots that Fredkin first saw the appeal of his kind of universe—a universe that proceeds tick by tick and dot by dot, a universe in which complexity boils down to rules of elementary simplicity.
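(A rough reconstruction of that simulation loop. The units, charges, masses, step size, and the plain Euler update are arbitrary choices of mine, not what Fredkin's PDP-1 program used; the point is only the shape of the recursion, new state fed back in as input, tick after tick.)

```python
import numpy as np

def orbit(steps=10000, dt=1e-3):
    """Two opposite charges advanced tick by tick.  Plain Euler steps are
    used for brevity, so the orbit slowly drifts over long runs."""
    pos = np.array([[1.0, 0.0], [-1.0, 0.0]])   # positions of the two particles
    vel = np.array([[0.0, 0.4], [0.0, -0.4]])   # initial velocities
    k, m = 1.0, 1.0                             # coupling constant and mass, arbitrary units
    trail = []
    for _ in range(steps):
        r = pos[1] - pos[0]
        dist = np.linalg.norm(r)
        force = k * r / dist**3                 # attraction between opposite charges
        acc = np.array([force / m, -force / m])
        vel += acc * dt                         # feed the new variables back in ...
        pos += vel * dt                         # ... and repeat, thousands of times
        trail.append(pos.copy())
    return np.array(trail)

trail = orbit()
```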

Fredkin's discovery of cellular automata a few years later permitted him further to indulge his taste for economy of information and strengthened his bond with the recursive algorithm. The patterns of automata are often all but impossible to describe with calculus yet easy to express algorithmically. Nothing is so striking about a good cellular automaton as the contrast between the simplicity of the underlying algorithm and the richness of its result. We have all felt the attraction of such contrasts. It accompanies the comprehension of any process, conceptual or physical, by which simplicity accommodates complexity. Simple solutions to complex problems, for example, make us feel good. The social engineer who designs uncomplicated legislation that will cure numerous social ills, the architect who eliminates several nagging design flaws by moving a single closet, the doctor who traces gastro-intestinal, cardiovascular, and respiratory ailments to a single, correctable cause—all feel the same kind of visceral, aesthetic satisfaction that must have filled the first caveman who literally killed two birds with one stone.

For scientists, the moment of discovery does not simply reinforce the search for knowledge; it inspires further research. Indeed, it directs research. The unifying principle, upon its apprehension, can elicit such devotion that thereafter the scientist looks everywhere for manifestations of it. It was the scientist in Fredkin who, upon seeing how a simple programming rule could yield immense complexity, got excited about looking at physics in a new way and stayed excited. He spent much of the next three decades fleshing out his intuition.

FREDKIN'S RESIGNATION FROM BOLT BERANEK AND Newman did not surprise Licklider. "I could tell that Ed was disappointed in the scope of projects undertaken at BBN. He would see them on a grander scale. I would try to argue—hey, let's cut our teeth on this and then move on to bigger things." Fredkin wasn't biting. "He came in one day and said, 'Gosh, Lick, I really love working here, but I'm going to have to leave. I've been thinking about my plans for the future, and I want to make'—I don't remember how many millions of dollars, but it shook me—'and I want to do it in about four years.' And he did amass however many millions he said he would amass in the time he predicted, which impressed me considerably."

In 1962 Fredkin founded Information International Incorporated—an impressive name for a company with no assets and no clients, whose sole employee had never graduated from college. Triple-I, as the company came to be called, was placed on the road to riches by an odd job that Fredkin performed for the Woods Hole Oceanographic Institute. One of Woods Hole's experiments had run into a complication: underwater instruments had faithfully recorded the changing direction and strength of deep ocean currents, but the information, encoded in tiny dots of light on sixteen-millimeter film, was inaccessible to the computers that were supposed to analyze it. Fredkin rented a sixteen-millimeter movie projector and with a surprisingly simple modification turned it into a machine for translating those dots into terms the computer could accept.

This contraption pleased the people at Woods Hole and led to a contract with Lincoln Laboratory. Lincoln was still doing work for the Air Force, and the Air Force wanted its computers to analyze radar information that, like the Woods Hole data, consisted of patterns of light on film. A makeshift information-conversion machine earned Triple-I $10,000, and within a year the Air Force hired Fredkin to build equipment devoted to the task. The job paid $350,000—the equivalent today of around $1 million. RCA and other companies, it turned out, also needed to turn visual patterns into digital data, and "programmable film readers" that sold for $500,000 apiece became Triple-I's stock-in-trade. In 1968 Triple-I went public and Fredkin was suddenly a millionaire. Gradually he cashed in his chips. First he bought a ranch in Colorado. Then one day he was thumbing through the classifieds and saw that an island in the Caribbean was for sale. He bought it.

In the early 1960s, at the suggestion of the Defense Department's Advanced Research Projects Agency, MIT set up what would become its Laboratory for Computer Science. It was then called Project MAC, an acronym that stood for both "machine-aided cognition" and "multiaccess computer." Fredkin had connections with the project from the beginning. Licklider, who had left BBN for the Pentagon shortly after Fredkin's departure, was influential in earmarking federal money for MAC. Marvin Minsky—who would later serve on Triple-I's board, and by the end of 1967 owned some of its stock—was centrally involved in MAC's inception. Fredkin served on Project MAC's steering committee, and in 1966 he began discussing with Minsky the possibility of becoming a visiting professor at MIT. The idea of bringing a college dropout onto the faculty, Minsky recalls, was not as outlandish as it now sounds; computer science had become an academic discipline so suddenly that many of its leading lights possessed meager formal credentials. In 1968, after Licklider had come to MIT and become the director of Project MAC, he and Minsky convinced Louis Smullin, the head of the electrical-engineering department, that Fredkin was worth the gamble. "We were a growing department and we wanted exciting people," Smullin says. "And Ed was exciting."

Fredkin had taught for barely a year before he became a full professor, and not much later, in 1971, he was appointed the head of Project MAC—a position that was also short-lived, for in the fall of 1974 he began a sabbatical at the California Institute of Technology as a Fairchild Distinguished Scholar. He went to Caltech under the sponsorship of Richard Feynman. The deal, Fredkin recalls, was that he would teach Feynman more about computer science, and Feynman would teach him more about physics. While there, Fredkin developed an idea that has slowly come to be seen as a profound contribution to both disciplines. The idea is also—in Fredkin's mind, at least—corroborating evidence for his theory of digital physics. To put its upshot in brief and therefore obscure terms, Fredkin found that computation is not inherently irreversible and thus it is possible, in principle, to build a computer that doesn't use up energy and doesn't give off heat.

All computers on the market are irreversible. That is, their history of information processing cannot be inferred from their present informational state; you cannot look at the data they contain and figure out how they arrived at it. By the time the average computer tells you that 2 plus 2 equals 4, it has forgotten the question; for all it knows, you asked what 1 plus 3 is. The reason for this ignorance is that computers discharge information once it is no longer needed, so that they won't get clogged up.

In 1961 Rolf Landauer, of IBM's Thomas J. Watson Research Center, established that this destruction of information is the only part of the computational process that unavoidably involves the dissipation of energy. It takes effort, in other words, for a computer to forget things but not necessarily for it to perform other functions. Thus the question of whether you can, in principle, build a universal computer that doesn't dissipate energy in the form of heat is synonymous with the question of whether you can design a logically reversible universal computer, one whose computational history can always be unearthed. Landauer, along with just about everyone else, thought such a computer impossible; all past computer architectures had implied the regular discarding of information, and it was widely believed that this irreversibility was intrinsic to computation. But while at Caltech, Fredkin did one of his favorite things—he showed that everyone had been wrong all along.
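(The article does not state the bound itself; the usual form of Landauer's result is that erasing one bit must dissipate at least kT ln 2 of energy. A quick worked number at room temperature:)

```python
from math import log

k_B = 1.380649e-23                 # Boltzmann's constant, joules per kelvin
T = 300.0                          # roughly room temperature, kelvin
energy_per_erased_bit = k_B * T * log(2)
print(f"{energy_per_erased_bit:.2e} J")   # about 2.9e-21 joules per forgotten bit
```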

Of the two kinds of reversible computers invented by Fredkin, the better known is called the billiard-ball computer. If it were ever actually built, it would consist of billiard balls ricocheting around in a labyrinth of "mirrors," bouncing off the mirrors at 45-degree angles, periodically banging into other moving balls at 90-degree angles, and occasionally exiting through doorways that from time to time would permit new balls to enter. To extract data from the machine, you would superimpose a grid over it, and the presence or absence of a ball in a given square at a given point in time would constitute information. Such a machine, Fredkin showed, would qualify as a universal computer; it could do anything that normal computers do. But unlike other computers, it would be perfectly reversible; to recover its history, all you would have to do is stop it and run it backward. Charles H. Bennett, of IBM's Thomas J. Watson Research Center, independently arrived at a different proof that reversible computation is possible, though he considers the billiard-ball computer to be in some respects a more elegant solution to the problem than his own.
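(The article never names a logic element, but the gate most closely associated with the billiard-ball computer is Fredkin's own controlled-swap, or Fredkin, gate. A sketch of why such a gate loses no information yet still suffices for ordinary logic:)

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: if the control bit c is 1, swap a and b.
    The mapping is its own inverse, so no information is ever destroyed."""
    return (c, b, a) if c else (c, a, b)

# Reversibility: applying the gate twice returns the original triple.
assert all(fredkin(*fredkin(c, a, b)) == (c, a, b)
           for c in (0, 1) for a in (0, 1) for b in (0, 1))

# Ordinary logic falls out when some inputs are held constant:
x, y = 1, 0
_, _, x_and_y = fredkin(x, y, 0)     # third output is x AND y
_, copy_of_x, not_x = fredkin(x, 0, 1)   # copies x and produces NOT x
```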

The billiard-ball computer will never be built, because it is a platonic device, existing only in a world of ideals. The balls are perfectly round and hard, and the table perfectly smooth and hard. There is no friction between the two, and no energy is lost when balls collide. Still, although these ideals are unreachable, they could be approached eternally through technological refinement, and the heat produced by friction and collision could thus be reduced without limit. Since no additional heat would be created by information loss, there would be no necessary minimum on the total heat emitted by the computer. "The cleverer you are, the less heat it will generate," Fredkin says.

The connection Fredkin sees between the billiard-ball computer and digital physics exemplifies the odd assortment of evidence he has gathered in support of his theory. Molecules and atoms and their constituents, he notes, move around in theoretically reversible fashion, like billiard balls (although it is not humanly possible, of course, actually to take stock of the physical state of the universe, or even one small corner of it, and reconstruct history by tracing the motion of microscopic particles backward). Well, he asks, given the theoretical reversibility of physical reality, doesn't the theoretical feasibility of a reversible computer lend credence to the claim that computation is reality's basis?

No and yes. Strictly speaking, Fredkin's theory doesn't demand reversible computation. It is conceivable that an irreversible process at the very core of reality could give rise to the reversible behavior of molecules, atoms, electrons, and the rest. After all, irreversible computers (that is, all computers on the market) can simulate reversible billiard balls. But they do so in a convoluted way, Fredkin says, and the connection between an irreversible substratum and a reversible stratum would, similarly, be tortuous—or, as he puts it, "aesthetically obnoxious." Fredkin prefers to think that the cellular automaton underlying reversible reality does its work gracefully.

Consider, for example, a variant of the billiard-ball computer invented by Norman Margolus, the Canadian in MIT's information-mechanics group. Margolus showed how a two-state cellular automaton that was itself reversible could simulate the billiard-ball computer using only a simple rule involving a small neighborhood. This cellular automaton in action looks like a jazzed-up version of the original video game, Pong. It is an overhead view of endlessly energetic balls ricocheting off clusters of mirrors and each other. It is proof that a very simple binary cellular automaton can give rise to the seemingly more complex behavior of microscopic particles bouncing off each other. And, as a kind of bonus, these particular particles themselves amount to a computer. Though Margolus discovered this powerful cellular-automaton rule, it was Fredkin who had first concluded that it must exist and persuaded Margolus to look for it. "He has an intuitive idea of how things should be," Margolus says. "And often, if he can't come up with a rational argument to convince you that it should be so, he'll sort of transfer his intuition to you."
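(A hedged sketch of how such block rules are organized. This is only the Margolus partitioning scaffold, 2-by-2 blocks whose boundaries shift on alternate ticks, with a trivially invertible block update standing in for the actual billiard-ball rule, which is not reproduced here.)

```python
import numpy as np

def margolus_step(grid, odd):
    """Skeleton of a Margolus-neighborhood update: cut the lattice into 2x2
    blocks (shifted by one cell on alternate ticks) and put every block
    through the same invertible table.  Here the 'table' is just a 90-degree
    clockwise rotation of each block, chosen because it is obviously
    reversible: undo it by rotating counterclockwise with the partitions
    visited in reverse order."""
    g = np.roll(grid, (-1, -1), axis=(0, 1)) if odd else grid
    h, w = g.shape
    blocks = g.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    blocks = np.rot90(blocks, k=-1, axes=(2, 3))       # rotate every 2x2 block
    g = blocks.transpose(0, 2, 1, 3).reshape(h, w)
    return np.roll(g, (1, 1), axis=(0, 1)) if odd else g

grid = np.zeros((8, 8), dtype=np.uint8)
grid[3, 3] = 1                      # a lone "particle"
for t in range(6):
    grid = margolus_step(grid, odd=(t % 2 == 1))
```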

That, really, is what Fredkin is trying to do when he argues that the universe is a computer. He cannot give you a single line of reasoning that leads inexorably, or even very plausibly, to this conclusion. He can tell you about the reversible computer, about Margolus's cellular automaton, about the many physical quantities, like light, that were once thought to be continuous but are now considered discrete, and so on. The evidence consists of many little things—so many, and so little, that in the end he is forced to convey his truth by simile. "I find the supporting evidence for my beliefs in ten thousand different places," he says. "And to me it's just totally overwhelming. It's like there's an animal I want to find. I've found his footprints. I've found his droppings. I've found the half-chewed food. I find pieces of his fur, and so on. In every case it fits one kind of animal, and it's not like any animal anyone's ever seen. People say, Where is this animal? I say, Well, he was here, he's about this big, this that and the other. And I know a thousand things about him. I don't have him in hand, but I know he's there." The story changes upon retelling. One day it's Bigfoot that Fredkin's trailing. Another day it's a duck: feathers are everywhere, and the tracks are webbed. Whatever the animal, the moral of the story remains the same: "What I see is so compelling that it can't be a creature of my imagination."

V. Deus ex Machina


THERE WAS SOMETHING BOTHERSOME ABOUT ISAAC Newton's theory of gravitation. The idea that the sun exerts a pull on the earth, and vice versa, sounded vaguely supernatural and, in any event, was hard to explain. How, after all, could such "action at a distance" be realized? Did the earth look at the sun, estimate the distance, and consult the law of gravitation to determine where it should move and how fast? Newton sidestepped such questions. He fudged with the Latin phrase si esset: two bodies, he wrote, behave as if impelled by a force inversely proportional to the square of their distance. Ever since Newton, physics has followed his example. Its "force fields" are, strictly speaking, metaphorical, and its laws purely descriptive. Physicists make no attempt to explain why things obey the law of electromagnetism or of gravitation. The law is the law, and that's all there is to it.

Fredkin refuses to accept authority so blindly. He posits not only laws but also a law-enforcement agency: a computer. Somewhere out there, he believes, is a machinelike thing that actually keeps our individual bits of space abiding by the rule of the universal cellular automaton. With this belief Fredkin crosses the line between physics and metaphysics, between scientific hypothesis and cosmic speculation. If Fredkin had Newton's knack for public relations, if he stopped at saying that the universe operates as if it were a computer, he could strengthen his stature among physicists while preserving the essence of his theory—the idea that the dynamics of physical reality will ultimately be better captured by a single recursive algorithm than by the mathematics of conventional physics, and that the continuity of time and space implicit in traditional mathematics is illusory.

Actually, some estimable physicists have lately been saying things not wholly unlike this stripped-down version of the theory. T. D. Lee, a Nobel laureate at Columbia University, has written at length about the possibility that time is discrete. And in 1984 Scientific American, not exactly a soapbox for cranks, published an article in which Stephen Wolfram, then of Princeton's Institute for Advanced Study, wrote, "Scientific laws are now being viewed as algorithms. . . . Physical systems are viewed as computational systems, processing information much the way computers do." He concluded, "A new paradigm has been born."

The line between responsible scientific speculation and off-the-wall metaphysical pronouncement was nicely illustrated by an article in which Tommaso Toffoli, the Italian in MIT's information-mechanics group, stayed barely on the responsible side of it. Published in the journal Physica D, the article was called "Cellular automata as an alternative to (rather than an approximation of) differential equations in modeling physics." Toffoli's thesis captured the core of Fredkin's theory yet had a perfectly reasonable ring to it. He simply suggested that the historical reliance of physicists on calculus may have been due not just to its merits but also to the fact that before the computer, alternative languages of description were not practical.

Why does Fredkin refuse to do the expedient thing—leave out the part about the universe actually being a computer? One reason is that he considers reprehensible the failure of Newton, and of all physicists since, to back up their descriptions of nature with explanations. He is amazed to find "perfectly rational scientists" believing in "a form of mysticism: that things just happen because they happen." The best physics, Fredkin seems to believe, is metaphysics.

The trouble with metaphysics is its endless depth. For every question that is answered, at least one other is raised, and it is not always clear that, on balance, any progress has been made. For example, where is this computer that Fredkin keeps talking about? Is it in this universe, residing along some fifth or sixth dimension that renders it invisible? Is it in some meta-universe? The answer is the latter, apparently, and to understand why, we need to return to the problem of the infinite regress, a problem that Rolf Landauer, among others, has cited with respect to Fredkin's theory. Landauer illustrates the problem by telling the old turtle story. A professor has just finished lecturing at some august university about the origin and structure of the universe, and an old woman in tennis shoes walks up to the lectern. "Excuse me, sir, but you've got it all wrong," she says. "The truth is that the universe is sitting on the back of a huge turtle." The professor decides to humor her. "Oh, really?" he asks. "Well, tell me, what is the turtle standing on?" The lady has a ready reply: "Oh, it's standing on another turtle." The professor asks, "And what is that turtle standing on?" Without hesitation, she says, "Another turtle." The professor, still game, repeats his question. A look of impatience comes across the woman's face. She holds up her hand, stopping him in mid-sentence. "Save your breath, sonny," she says. "It's turtles all the way down."

The infinite-regress problem afflicts Fredkin's theory in two ways, one of which we have already encountered: if matter is made of information, what is the information made of? And even if one concedes that it is no more ludicrous for information to be the most fundamental stuff than for matter or energy to be the most fundamental stuff, what about the computer itself? What is it made of? What energizes it? Who, or what, runs it, or set it in motion to begin with?

WHEN FREDKIN IS DISCUSSING THE PROBLEM OF THE infinite regress, his logic seems variously cryptic, evasive, and appealing. At one point he says, "For everything in the world where you wonder, 'What is it made out of?' the only thing I know of where the question doesn't have to be answered with anything else is for information." This puzzles me. Thousands of words later I am still puzzled, and I press for clarification. He talks some more. What he means, as near as I can tell, is what follows.

First of all, it doesn't matter what the information is made of, or what kind of computer produces it. The computer could be of the conventional electronic sort, or it could be a hydraulic machine made of gargantuan sewage pipes and manhole covers, or it could be something we can't even imagine. What's the difference? Who cares what the information consists of? So long as the cellular automaton's rule is the same in each case, the patterns of information will be the same, and so will we, because the structure of our world depends on pattern, not on the pattern's substrate; a carbon atom, according to Fredkin, is a certain configuration of bits, not a certain kind of bits.

Besides, we can never know what the information is made of or what kind of machine is processing it. This point is reminiscent of childhood conversations that Fredkin remembers having with his sister, Joan, about the possibility that they were part of a dream God was having. "Say God is in a room and on his table he has some cookies and tea," Fredkin says. "And he's dreaming this whole universe up. Well, we can't reach out and get his cookies. They're not in our universe. See, our universe has bounds. There are some things in it and some things not." The computer is not; hardware is beyond the grasp of its software. Imagine a vast computer program that contained bodies of information as complex as people, motivated by bodies of information as complex as ideas. These "people" would have no way of figuring out what kind of computer they owed their existence to, because everything they said, and everything they did—including formulating metaphysical hypotheses—would depend entirely on the programming rules and the original input. As long as these didn't change, the same metaphysical conclusions would be reached in an old XD-1 as in a Kaypro 2.

This idea—that sentient beings could be constitutionally numb to the texture of reality—has fascinated a number of people, including, lately, computer scientists. One source of the fascination is the fact that any universal computer can simulate another universal computer, and the simulated computer can, because it is universal, do the same thing. So it is possible to conceive of a theoretically endless series of computers contained, like Russian dolls, in larger versions of themselves and yet oblivious of those containers. To anyone who has lived intimately with, and thought deeply about, computers, says Charles Bennett, of IBM's Watson Lab, this notion is very attractive. "And if you're too attracted to it, you're likely to part company with the physicists." Physicists, Bennett says, find heretical the notion that anything physical is impervious to experiment, removed from the reach of science.

Fredkin's belief in the limits of scientific knowledge may sound like evidence of humility, but in the end it permits great ambition; it helps him go after some of the grandest philosophical questions around. For example, there is a paradox that crops up whenever people think about how the universe came to be. On the one hand, it must have had a beginning. After all, things usually do. Besides, the cosmological evidence suggests a beginning: the big bang. Yet science insists that it is impossible for something to come from nothing; the laws of physics forbid the amount of energy and mass in the universe to change. So how could there have been a time when there was no universe, and thus no mass or energy?

Fredkin escapes from this paradox without breaking a sweat. Granted, he says, the laws of our universe don't permit something to come from nothing. But he can imagine laws that would permit such a thing; in fact, he can imagine algorithmic laws that would permit such a thing. The conservation of mass and energy is a consequence of our cellular automaton's rules, not a consequence of all possible rules. Perhaps a different cellular automaton governed the creation of our cellular automaton—just as the rules for loading software are different from the rules running the program once it has been loaded.

What's funny is how hard it is to doubt Fredkin when with such assurance he makes definitive statements about the creation of the universe—or when, for that matter, he looks you in the eye and tells you the universe is a computer. Partly this is because, given the magnitude and intrinsic intractability of the questions he is addressing, his answers aren't all that bad. As ideas about the foundations of physics go, his are not completely out of the ball park; as metaphysical and cosmogonic speculation goes, his isn't beyond the pale.

But there's more to it than that. Fredkin is, in his own odd way, a rhetorician of great skill. He talks softly, even coolly, but with a low-key power, a quiet and relentless confidence, a kind of high-tech fervor. And there is something disarming about his self-awareness. He's not one of these people who say crazy things without having so much as a clue that you're sitting there thinking what crazy things they are. He is acutely conscious of his reputation; he knows that some scientists are reluctant to invite him to conferences for fear that he'll say embarrassing things. But he is not fazed by their doubts. "You know, I'm a reasonably smart person. I'm not the smartest person in the world, but I'm pretty smart—and I know that what I'm involved in makes perfect sense. A lot of people build up what might be called self-delusional systems, where they have this whole system that makes perfect sense to them, but no one else ever understands it or buys it. I don't think that's a major factor here, though others might disagree." It's hard to disagree, when he so forthrightly offers you the chance.

Still, as he gets further from physics, and more deeply into philosophy, he begins to try one's trust. For example, having tackled the question of what sort of process could generate a universe in which spontaneous generation is impossible, he aims immediately for bigger game: Why was the universe created? Why is there something here instead of nothing?

WHEN THIS SUBJECT COMES UP, WE ARE SITTING IN the Fredkins' villa. The living area has pale rock walls, shiny-clean floors made of large white ceramic tiles, and built-in bookcases made of blond wood. There is lots of air—the ceiling slopes up in the middle to at least twenty feet—and the air keeps moving; some walls consist almost entirely of wooden shutters that, when open, let the sea breeze pass as fast as it will. I am glad of this. My skin, after three days on Fredkin's island, is hot, and the air, though heavy, is cool. The sun is going down.

Fredkin, sitting on a white sofa, is talking about an interesting characteristic of some computer programs, including many cellular automata: there is no shortcut to finding out what they will lead to. This, indeed, is a basic difference between the "analytical" approach associated with traditional mathematics, including differential equations, and the "computational" approach associated with algorithms. You can predict a future state of a system susceptible to the analytic approach without figuring out what states it will occupy between now and then, but in the case of many cellular automata, you must go through all the intermediate states to find out what the end will be like: there is no way to know the future except to watch it unfold.
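(A loose illustration of the contrast, with toy rules of my own choosing: one update rule happens to have a closed form, so its distant future can be written down directly; a generic rule must simply be run.)

```python
# Analytic shortcut: for a rule with a closed form (here x -> 2x), the state
# after n ticks can be written down directly, skipping every intermediate state.
def doubling_after(x0, n):
    return x0 * 2 ** n

# Computational grind: for a generic rule there is no such jump -- the only
# way to learn the n-th state is to live through all n updates.
def iterate(rule, x0, n):
    x = x0
    for _ in range(n):
        x = rule(x)
    return x

assert doubling_after(5, 20) == iterate(lambda x: 2 * x, 5, 20)
```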

This indeterminacy is very suggestive. It suggests, first of all, why so many "chaotic" phenomena, like smoke rising from a cigarette, are so difficult to predict using conventional mathematics. (In fact, some scientists have taken to modeling chaotic systems with cellular automata.) To Fredkin, it also suggests that even if human behavior is entirely determined, entirely inevitable, it may be unpredictable; there is room for "pseudo free will" in a completely mechanistic universe. But on this particular evening Fredkin is interested mainly in cosmogony, in the implications of this indeterminacy for the big question: Why does this giant computer of a universe exist?

It's simple, Fredkin explains: "The reason is, there is no way to know the answer to some question any faster than what's going on."

Aware that he may have said something enigmatic, Fredkin elaborates. Suppose, he says, that there is an all-powerful God. "And he's thinking of creating this universe. He's going to spend seven days on the job—this is totally allegorical—or six days on the job. Okay, now, if he's as all-powerful as you might imagine, he can say to himself, 'Wait a minute, why waste the time? I can create the whole thing, or I can just think about it for a minute and just realize what's going to happen so that I don't have to bother.' Now, ordinary physics says, Well, yeah, you got an all-powerful God, he can probably do that. What I can say is—this is very interesting—I can say I don't care how powerful God is; he cannot know the answer to the question any faster than doing it. Now, he can have various ways of doing it, but he has to do every Goddamn single step with every bit or he won't get the right answer. There's no shortcut."

Around sundown on Fredkin's island all kinds of insects start chirping or buzzing or whirring. Meanwhile, the wind chimes hanging just outside the back door are tinkling with methodical randomness. All this music is eerie and vaguely mystical. And so, increasingly, is the conversation. It is one of those moments when the context you've constructed falls apart, and gives way to a new, considerably stranger one. The old context in this case was that Fredkin is an iconoclastic thinker who believes that space and time are discrete, that the laws of the universe are algorithmic, and that the universe works according to the same principles as a computer (he uses this very phrasing in his most circumspect moments). The new context is that Fredkin believes that the universe is very literally a computer and that it is being used by someone, or something, to solve a problem. It sounds like a good-news/bad-news joke: the good news is that our lives have purpose; the bad news is that their purpose is to help some remote hacker estimate pi to nine jillion decimal places.

So, I say, you're arguing that the reason we're here is that some being wanted to theorize about reality, and the only way he could test his theories was to create reality? "No, you see, my explanation is much more abstract. I don't imagine there is a being or anything. I'm just using that to talk to you about it. What I'm saying is that there is no way to know what the future is any faster than running this [the universe] to get to that [the future]. Therefore, what I'm assuming is that there is a question and there is an answer, okay? I don't make any assumptions about who has the question, who wants the answer, anything."

But the more we talk, the closer Fredkin comes to the religious undercurrents he's trying to avoid. "Every astrophysical phenomenon that's going on is always assumed to be just accident," he says. "To me, this is a fairly arrogant position, in that intelligence—and computation, which includes intelligence, in my view—is a much more universal thing than people think. It's hard for me to believe that everything out there is just an accident." This sounds awfully like a position that Pope John Paul II or Billy Graham would take, and Fredkin is at pains to clarify his position: "I guess what I'm saying is—I don't have any religious belief. I don't believe that there is a God. I don't believe in Christianity or Judaism or anything like that, okay? I'm not an atheist, I'm not an agnostic, I'm just in a simple state. I don't know what there is or might be. But what I can say is that it seems likely to me that this particular universe we have is a consequence of something I would call intelligent." Does he mean that there's something out there that wanted to get the answer to a question? "Yeah." Something that set up the universe to see what would happen? "In some way, yes."

VI. The Language Barrier


IN 1974, UPON RETURNING TO MIT FROM CALTECH, Fredkin was primed to revolutionize science. Having done the broad conceptual work (concluding that the universe is a computer), he would enlist the aid of others in taking care of the details—translating the differential equations of physics into algorithms, experimenting with cellular-automaton rules and selecting the most elegant, and, eventually, discovering The Rule, the single law that governs every bit of space and accounts for everything. "He figured that all he needed was some people who knew physics, and that it would all be easy," Margolus says.

One early obstacle was Fredkin's reputation. He says, "I would find a brilliant student; he'd get turned on to this stuff and start to work on it. And then he would come to me and say, 'I'm going to work on something else.' And I would say, 'Why?' And I had a few very honest ones, and they would say, 'Well, I've been talking to my friends about this and they say I'm totally crazy to work on it. It'll ruin my career. I'll be tainted forever.'" Such fears were not entirely unfounded. Fredkin is one of those people who arouse either affection, admiration, and respect, or dislike and suspicion. The latter reaction has come from a number of professors at MIT, particularly those who put a premium on formal credentials, proper academic conduct, and not sounding like a crackpot. Fredkin was never oblivious of the complaints that his work wasn't "worthy of MIT," nor of the movements, periodically afoot, to sever, or at least weaken, his ties to the university. Neither were his graduate students.

Fredkin's critics finally got their way. In the early 1980s, while he was serving briefly as the president of Boston's CBS-TV affiliate, someone noticed that he wasn't spending much time around MIT and pointed to a faculty rule limiting outside professional activities. Fredkin was finding MIT "less and less interesting" anyway, so he agreed to be designated an adjunct professor. As he recalls the deal, he was going to do a moderate amount of teaching and be paid an "appropriate" salary. But he found the actual salary insulting, declined payment, and never got around to teaching. Not surprisingly, he was not reappointed adjunct professor when his term expired, in 1986. Meanwhile, he had so nominally discharged his duties as the head of the information-mechanics group that the title was given to Toffoli.

Fredkin doubts that his ideas will achieve widespread acceptance anytime soon. He believes that most physicists are so deeply immersed in their kind of mathematics, and so uncomprehending of computation, as to be incapable of grasping the truth. Imagine, he says, that a twentieth-century time traveler visited Italy in the early seventeenth century and tried to reformulate Galileo's ideas in terms of calculus. Although it would be a vastly more powerful language of description than the old one, conveying its importance to the average scientist would be nearly impossible. There are times when Fredkin breaks through the language barrier, but they are few and far between. He can sell one person on one idea, another on another, but nobody seems to get the big picture. It's like a painting of a horse in a meadow, he says: "Everyone else only looks at it with a microscope, and they say, 'Aha, over here I see a little brown pigment. And over here I see a little green pigment.' Okay. Well, I see a horse."

Fredkin's research has nevertheless paid off in unanticipated ways. Comparing a computer's workings and the dynamics of physics turned out to be a good way to figure out how to build a very efficient computer—one that harnesses the laws of physics with great economy. Thus Toffoli and Margolus have designed an inexpensive but powerful cellular-automata machine, the CAM 6. The "machine" is actually a circuit board that when inserted in a personal computer permits it to orchestrate visual complexity at a speed that can be matched only by general-purpose computers costing hundreds of thousands of dollars. Since the circuit board costs only around $1,500, this engrossing machine may well entice young scientific revolutionaries into joining the quest for The Rule. Fredkin speaks of this possibility in almost biblical terms. "The big hope is that there will arise somewhere someone who will have some new, brilliant ideas," he says. "And I think this machine will have a dramatic effect on the probability of that happening."

But even if it does happen, it will not ensure Fredkin a place in scientific history. He is not even on record as believing that the universe is a computer. Although some of his tamer insights have been adopted, fleshed out, and published by Toffoli or Margolus, sometimes in collaboration with him, Fredkin himself has published nothing on digital physics. His stated rationale for not publishing has to do with, of all things, lack of ambition. "I'm just not terribly interested," he says. "A lot of people are fantastically motivated by publishing. It's part of a whole thing of getting ahead in the world." Margolus has another explanation: "Writing something down in good form takes a lot of time. And usually by the time he's done with the first or second draft, he has another wonderful idea that he's off on."

These two theories have merit, but so does a third: Fredkin can't write for academic journals. He doesn't know how. His erratic, hybrid education has left him with a mixture of terminology that neither computer scientists nor physicists recognize as their native tongue. Further, he is not schooled in the rules of scientific discourse; he seems just barely aware of the line between scientific hypothesis and philosophical speculation. He is not politic enough to confine his argument to its essence: that time and space are discrete, and that the state of every point in space at any point in time is determined by a single algorithm. In short, the very background that has allowed Fredkin to see the universe as a computer seems to prevent him from sharing his vision. If he could talk like other scientists, he might see only the things that they see.


Robert Wright is the author of Three Scientists and Their Gods: Looking for Meaning in an Age of Information, The Moral Animal: Evolutionary Psychology and Everyday Life, and Nonzero: The Logic of Human Destiny.
Copyright © 2002 by The Atlantic Monthly Group. All rights reserved.
The Atlantic Monthly; April 1988; Did the Universe Just Happen?; Volume 261, No. 4; page 29.
Weak Human, Strong Force: Applying Advanced Chess to Military AI

Garry Kasparov, one of the greatest chess players of all time, developed advanced chess after losing his 1997 match to IBM’s Deep Blue supercomputer. Advanced chess marries the computational precision of machine algorithms with the intuition of human beings. Similar in concept to manned-unmanned teaming or the “centaur model,” Kasparov’s experimentation has important implications for the military’s use of AI.

In 2005, a chess website hosted an advanced chess tournament open to any player. Extraordinarily, the winners of the tournament were not grandmasters and their machines, but two chess amateurs utilizing three different computers. Kasparov observed, “their skill at manipulating and ‘coaching’ their computers to look very deeply into the positions effectively counteracted the superior chess understanding of their Grandmaster opponents and the greater computational power of other participants.” Kasparov concluded that a “weak human + machine + better process was superior to a strong computer alone and … superior to a strong human + machine + inferior process.” This conclusion became known as Kasparov’s Law.

As the Department of Defense seeks to better use artificial intelligence, Kasparov’s Law can help design command-and-control architecture and strengthen the training of the service members who will use it. Kasparov’s Law suggests that for human-machine collaboration to be effective, operators must be familiar with their machines and know how to best employ them. Future conflicts will not be won by the force with the highest computing power, most advanced chip design, or best tactical training, but by the force that most successfully employs novel algorithms to augment human decision-making. To achieve this, the U.S. military needs to identify, recruit, and retain people who not only understand data and computer logic, but who can also make full use of them. Military entrance exams, general military training, and professional military education should all be refined with this in mind.

Building a Better Process

Kasparov’s key insight was that building a “better process” requires an informed human at the human-machine interface. If operators do not understand the rules and the limitations of their AI partners, they will ask the wrong questions or command the wrong actions.

Kasparov’s “weak human” does not mean an inept or untrained one. The “weak human” understands the computer’s rules. The two amateurs that won the 2005 chess match used their knowledge of the rules to ask the right questions in the right way. The amateurs were not Grandmasters or experts with advanced strategies. But they were able to decipher the data their computers provided to unmask the agendas of their opponents and calculate the right moves. In other words, they used a computer to fill the role of a specialist or expert, and to inform their decision-making process.

The number and type of sensors that feed into global networks are growing rapidly. As in chess, algorithms can sift, sort, and organize intelligence data in order to make it easier for humans to interpret. AI algorithms can find patterns and probabilities while humans determine the contextual meaning to inform strategy. The critical question is how humans can best be positioned and trained to do this most effectively.
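To make that division of labor concrete, here is a minimal, illustrative Python sketch of the sift-sort-summarize step described above. The record fields, category labels, and sample feed are invented for the example and do not describe any fielded system; the point is that the machine compresses a pile of reports into ranked patterns, while judging what those patterns mean is left to the analyst.

```python
# Minimal sketch of the "machine sifts, human interprets" split described above.
# Field names, categories, and the sample feed are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Report:
    source: str        # e.g., "radar", "sigint", "open-source"
    category: str      # machine-assigned label for the observed activity
    confidence: float  # machine's confidence in that label, 0.0 to 1.0


def triage(reports, top_k=3):
    """Sort and summarize raw reports so an analyst sees patterns, not piles.

    Returns the top_k categories by report volume with average machine confidence.
    The analyst, not the algorithm, decides what the pattern means for strategy.
    """
    counts = Counter(r.category for r in reports)
    summary = []
    for category, n in counts.most_common(top_k):
        avg_conf = sum(r.confidence for r in reports if r.category == category) / n
        summary.append((category, n, avg_conf))
    return summary


if __name__ == "__main__":
    feed = [
        Report("radar", "unusual-flight-path", 0.81),
        Report("open-source", "port-congestion", 0.64),
        Report("radar", "unusual-flight-path", 0.77),
        Report("sigint", "emitter-relocation", 0.90),
        Report("radar", "unusual-flight-path", 0.70),
    ]
    for category, n, conf in triage(feed):
        print(f"{category}: {n} reports, average confidence {conf:.2f}")
```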

Familiarity and Trust

When human operators lack familiarity with AI-enhanced systems, they often suffer from either too little or too much confidence in them. Teaching military operators how to use AI properly requires teaching them a system’s limits and inculcating just the right level of trust. This is particularly crucial in life or death situations where human operators must decide when to turn off or override AI. The level of trust given to an AI is dependent on the maturity and proven performance of a system. When AI systems are in the design or testing phases, human operators should be particularly well-versed in their machine’s limitations and behavior so they can override it when needed. But this changes as the AI becomes more reliable.

Consider the introduction of the automatic ground collision avoidance system (auto-GCAS) into F-16 fighter jets. Adoption was stunted by nuisance “pull-ups,” when the AI unnecessarily took over the flight control system during early flight testing and fielding. The distrust this initially created among pilots was entirely understandable. As word spread throughout the F-16 community, many pilots began disabling the system altogether. But as the technology became more reliable, this distrust itself became a problem, preventing pilots from taking advantage of a proven life-saving algorithm. Now, newer pilots are far more trusting. Lieutenant David Alman, an Air National Guard pilot currently in flight training for the F-16, told the authors that “I think the average B-course student hugely prefers it [auto-GCAS].” In other words, once the system is proven, there is less need to train future aircrews as thoroughly in their machine’s behavior and teach them to trust it.

It took a number of policy mandates and personnel turnovers before F-16 pilots began to fly with auto-GCAS enabled during most missions. Today, the Defense Advanced Research Projects Agency and the U.S. Air Force are attempting to automate parts of aerial combat in their Air Combat Evolution program. In the program, trained pilots’ trust is evaluated when teamed with AI agents. One pilot was found to be disabling the AI agent before it had a chance to perform due to their preconceived distrust of the system. Such overriding behaviors negate the benefits that AI algorithms are designed to deliver. Retraining programs may help, but if a human operator continues to override their AI agents without cause, the military should be prepared to remove them from processes that contain AI interaction.

At the same time, overconfidence in AI can also be a problem. “Automation bias,” or the over-reliance on automation, occurs when users are unaware of the limits of their AI. In the crash of Air France 447, for example, pilots suffered from cognitive dissonance after the autopilot disengaged in a thunderstorm. They failed to recognize that the engine throttles, whose physical positions do not matter when autopilot is on, were set near idle power. As the pilots pulled back on the control stick, they expected the engines to respond with power as they would under normal autopilot throttle control. Instead, the engines slowly rolled back, and the aircraft’s speed decayed. Minutes later, Air France 447 pancaked into the Atlantic, fully stalled.

Identifying and Placing the Correct Talent

Correctly preparing human operators requires not only determining the maturity of the system but also differentiating between tactical and strategic forms of AI. In tactical applications, like airplanes or missile defense systems, timelines may be compressed beyond human reaction times, forcing the human to give full trust to a system and allow it to operate autonomously. In strategic or operational situations, by contrast, AI is attempting to derive adversary intent, which encompasses broader timelines and more ambiguous data. As a result, analysts who depend on an AI’s output need to be familiar with its internal workings in order to take advantage of its superior data processing and pattern-finding capabilities.

Consider the tactical applications of AI in air-to-air combat. Drones, for example, may operate in semi-autonomous or fully autonomous modes. In these situations, human operators must exercise control restraint, known as neglect benevolence, to allow their AI wingmen to function without interference. In piloted aircraft, AI pilot assist programs may provide turn-by-turn cues to the pilot to defeat an incoming threat, not unlike the turn-by-turn directions given by the Waze application to car drivers. Sensors around the fighter aircraft detect infrared, optical, and electromagnetic signatures, calculate the direction of arrival and guidance mode of the threat, and advise the pilot on the best course of action. In some cases, the AI pilot may even take control of the aircraft if human reaction time is too slow, as with the automatic ground collision avoidance system. When timelines are compressed and the type of relevant data is narrow, human operators do not need to be as familiar with the system’s behavior, especially once it is proven or certified. Without the luxury of time to judge or second-guess AI behavior, they simply need to know and trust its capabilities.
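As a rough illustration of that recommend-or-take-over logic, consider the following sketch. The threat fields, the reaction-time budget, and the cue wording are assumptions made for the example, not a description of any real flight system; what matters is the single branch point: when the timeline is shorter than the human reaction budget, the system acts, and otherwise it only advises.

```python
# Minimal sketch of the recommend-versus-take-over split described above.
# The reaction-time budget, threat fields, and cue text are illustrative assumptions.
from dataclasses import dataclass

HUMAN_REACTION_BUDGET_S = 1.5  # assumed time a pilot needs to perceive, decide, and act


@dataclass
class ThreatTrack:
    bearing_deg: float       # direction of arrival estimated from fused sensor returns
    time_to_impact_s: float  # estimated time remaining before the threat arrives
    guidance_mode: str       # e.g., "radar" or "infrared"


def advise_or_act(threat: ThreatTrack) -> str:
    """Return a cue for the pilot, or take action when the timeline is too short."""
    if threat.time_to_impact_s <= HUMAN_REACTION_BUDGET_S:
        # Inside the human reaction budget: the system acts, as auto-GCAS does for terrain.
        return f"AUTO-MANEUVER: evasive break away from bearing {threat.bearing_deg:.0f}"
    # Otherwise the human stays in the loop and receives a turn-by-turn style cue.
    return (f"CUE: threat at bearing {threat.bearing_deg:.0f}, {threat.guidance_mode} guidance, "
            f"{threat.time_to_impact_s:.1f} s to impact; break and dispense countermeasures")


if __name__ == "__main__":
    print(advise_or_act(ThreatTrack(bearing_deg=220, time_to_impact_s=6.0, guidance_mode="infrared")))
    print(advise_or_act(ThreatTrack(bearing_deg=45, time_to_impact_s=0.8, guidance_mode="radar")))
```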

However, the requirements will be different as AI gradually begins to play a bigger role in strategic processes like intelligence collection and analysis. When AI is being used to aggregate a wider swath of seemingly disparate data, understanding its approach is crucial to evaluating its output. Consider the following scenario: An AI monitoring system scans hundreds of refinery maintenance bulletins and notices that several state-controlled oil companies in a hostile country announce plans to shut down refineries for “routine maintenance” during a particular period. Then, going through thousands of cargo manifests, it discovers that a number of outbound oil tankers from that country have experienced delays in loading their cargo. The AI then reports that the nation in question is creating the conditions for economic blackmail. At this point, a human analyst could best assess this conclusion if they knew what kinds of delays the system had identified, how unusual these forms of delays were, and whether there were other political or environmental factors that might explain them.
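A minimal sketch of how such a monitoring system might surface its reasoning follows. The data, field names, and thresholds are entirely invented; the design point is that the flag arrives packaged with the shutdowns and delays behind it, so the analyst can weigh alternative explanations instead of accepting a bare conclusion.

```python
# Minimal sketch of the refinery/tanker scenario above. The machine flags a coincidence
# and exposes its evidence; the analyst judges intent. All data here are invented.
from dataclasses import dataclass


@dataclass
class MaintenanceNotice:
    refinery: str
    window: str  # reporting period, e.g., "2025-03"


@dataclass
class TankerDelay:
    vessel: str
    port: str
    window: str
    delay_days: int


def flag_coincidence(notices, delays, window, delay_threshold_days=3):
    """Return the evidence behind a possible supply-squeeze flag for one time window."""
    shutdowns = [n for n in notices if n.window == window]
    long_delays = [d for d in delays
                   if d.window == window and d.delay_days >= delay_threshold_days]
    flagged = len(shutdowns) >= 2 and len(long_delays) >= 2  # assumed trip-wire
    return {
        "window": window,
        "flag": flagged,
        "shutdowns": [n.refinery for n in shutdowns],
        "delayed_tankers": [(d.vessel, d.port, d.delay_days) for d in long_delays],
    }


if __name__ == "__main__":
    notices = [MaintenanceNotice("Refinery A", "2025-03"),
               MaintenanceNotice("Refinery B", "2025-03")]
    delays = [TankerDelay("Vessel 1", "Port X", "2025-03", 5),
              TankerDelay("Vessel 2", "Port X", "2025-03", 4),
              TankerDelay("Vessel 3", "Port Y", "2025-02", 1)]
    evidence = flag_coincidence(notices, delays, window="2025-03")
    print(evidence)  # analyst reviews the underlying shutdowns and delays, not just the flag
```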

Next Steps

With untrained operators, the force-multiplying effects of AI can be negated by the very people it is designed to aid. To avoid this, algorithm-dominated warfare requires updates to the way the military sifts and sorts its talent.

Tests like the Navy’s Aviation Selection Test Battery, the Air Force’s Officer Qualifying Test, or the universal Armed Services Vocational Aptitude Battery rate a candidate’s performance in a range of subject areas. With machines replacing certain kinds of human expertise, the military needs to screen for new skills, specifically the ability to understand machine systems, processes, and programming. Changing entry exams to test for data interpretation skills and an ability to understand machine logic would be a valuable first step. Google’s Developers certification or Amazon’s Web Services certification offer useful models that the military could adapt. The military should also reward recruits and service members for completing training in related fields from already-available venues such as massive open online courses.

For those already in the service, the Secretary of Defense should promote relevant skills by prioritizing competitive selection for courses specializing in understanding AI systems. Existing examples include Stanford University’s Symbolic Systems Program, the Massachusetts Institute of Technology’s AI Accelerator course, and the Naval Postgraduate School’s “Harnessing AI” course. The military could also develop new programs based out of institutions like the Naval Community College or the Naval Postgraduate School and build partnerships with civilian institutions that already offer high-quality education in artificial intelligence. Incorporating AI literacy into professional military education courses and offering incentives to take AI electives would help as well. The Air Force’s computer language initiative, now reflected in Section 241 of the 2021 National Defense Authorization Act, represents an important first step. Nascent efforts across the services need to be scaled up to offer commercially relevant professional learning opportunities at all points during the service member’s career.

Artificial intelligence is rapidly disrupting traditional analysis and becoming a force multiplier for humans, allowing them to focus on orchestration rather than the minutiae of rote and repetitive tasks. AI may also displace some current specializations, freeing people for roles that are better suited for humans. Understanding Kasparov’s Law can help the military cultivate the right talent to fully take advantage of this shift.

Trevor Phillips-Levine is a naval aviator and the Navy’s Joint Close Air Support branch officer. He has co-authored several articles regarding autonomous or remotely piloted platforms, publishing with the Center for International Maritime Security, U.S. Naval Institute Proceedings magazine, and the Modern War Institute. He can be reached on LinkedIn or Twitter.

Michael Kanaan is a Chief of Staff of the United States Air Force fellow at Harvard Kennedy School. He is also the author of T-Minus AI: Humanity’s Countdown to Artificial Intelligence and the New Pursuit of Global Power. You can find him on LinkedIn and Twitter.

Dylan Phillips-Levine is a naval aviator and a senior editor for the Center for International Maritime Security.

Walker D. Mills is a Marine infantry officer currently serving as an exchange officer at the Colombian Naval Academy in Cartagena, Colombia. He is also a nonresident fellow at the Brute Krulak Center for Innovation and Modern War and a nonresident fellow with the Irregular Warfare Initiative. He has written numerous articles for publications like War on the Rocks, Proceedings, and the Marine Corps Gazette.

Noah “Spool” Spataro is a division chief working Joint All Domain Command and Control assessments on the Joint Staff. His experiences traverse dual-use technology transition and requirements, standup and command of a remotely piloted aircraft squadron, and aviation command and control. He is a distinguished graduate of National Defense University’s College of Information and Cyberspace.

The positions expressed here are those of the authors and do not represent those of the Department of Defense or any part of the U.S. government.

Image: Public Domain

Industrial Engineering Bachelor of Science Degree

Courses and semester credit hours (Sem. Cr. Hrs.), by year

First Year

CHMG-131

General Education – Elective: General Chemistry for Engineers

This rigorous course is primarily for, but not limited to, engineering students. subjects include an introduction to some basic concepts in chemistry, stoichiometry, First Law of Thermodynamics, thermochemistry, electronic theory of composition and structure, and chemical bonding. The lecture is supported by workshop-style problem sessions. Offered in traditional and online format. Lecture 3 (Fall, Spring).

3 semester credit hours

ISEE-120

Fundamentals of Industrial Engineering

This course introduces students to industrial engineering and provides students with foundational tools used in the profession. The course is intended to prepare students for their first co-op experience in industrial engineering by exposing them to tools and concepts that are often encountered during early co-op assignments. The course covers specific tools and their applications, including systems design and the integration. The course uses a combination of lecture and laboratory activities. Projects and group exercises will be used to cover hands-on applications and problem-solving related to subjects covered in lectures. (This class is restricted to ISEE-BS, ENGRX-UND, or ISEEDU Major students.) Lecture 3 (Fall, Spring).

3 semester credit hours

ISEE-140

Materials Processing

A study of the application of machine tools and fabrication processes to engineering materials in the manufacture of products. Processes covered include cutting, molding, casting, forming, powder metallurgy, solid modeling, engineering drawing, and welding. Students make a project in the lab portion of the course. (This class is restricted to ISEE-BS, ENGRX-UND, or ISEEDU Major students.) Lab 1 (Fall).

3 semester credit hours

MATH-181

General Education – Mathematical Perspective A: Project-Based Calculus I

This is the first in a two-course sequence intended for students majoring in mathematics, science, or engineering. It emphasizes the understanding of concepts, and using them to solve physical problems. The course covers functions, limits, continuity, the derivative, rules of differentiation, applications of the derivative, Riemann sums, definite integrals, and indefinite integrals. (Prerequisite: A- or better in MATH-111 or A- or better in ((NMTH-260 or NMTH-272 or NMTH-275) and NMTH-220) or a math placement test score greater than or equal to 70 or department permission to enroll in this class.) Lecture 6 (Fall, Spring, Summer).

4 semester credit hours

MATH-182

General Education – Mathematical Perspective B: Project-Based Calculus II

This is the second in a two-course sequence intended for students majoring in mathematics, science, or engineering. It emphasizes the understanding of concepts, and using them to solve physical problems. The course covers techniques of integration including integration by parts, partial fractions, improper integrals, applications of integration, representing functions by infinite series, convergence and divergence of series, parametric curves, and polar coordinates. (Prerequisites: C- or better in (MATH-181 or MATH-173 or 1016-282) or (MATH-171 and MATH-180) or equivalent course(s).) Lecture 6 (Fall, Spring, Summer).

4 semester credit hours

PHYS-211

General Education – Scientific Principles Perspective: University Physics I

This is a course in calculus-based physics for science and engineering majors. subjects include kinematics, planar motion, Newton's Laws, gravitation, work and energy, momentum and impulse, conservation laws, systems of particles, rotational motion, static equilibrium, mechanical oscillations and waves, and data presentation/analysis. The course is taught in a workshop format that integrates the material traditionally found in separate lecture and laboratory courses. (Prerequisites: C- or better in MATH-181 or equivalent course. Co-requisites: MATH-182 or equivalent course.) Lec/Lab 6 (Fall, Spring).

4 semester credit hours

YOPS-010

RIT 365: RIT Connections

RIT 365 students participate in experiential learning opportunities designed to launch them into their career at RIT, support them in making multiple and varied connections across the university, and immerse them in processes of competency development. Students will plan for and reflect on their first-year experiences, receive feedback, and develop a personal plan for future action in order to develop foundational self-awareness and recognize broad-based professional competencies. Lecture 1 (Fall, Spring).

0 semester credit hours

General Education – First Year Writing (WI): 3 semester credit hours

General Education – Artistic Perspective: 3 semester credit hours

General Education – Ethical Perspective: 3 semester credit hours

General Education – Elective: 3 semester credit hours

Second Year

EGEN-99

Engineering Co-op Preparation

This course will prepare students, who are entering their second year of study, for both the job search and employment in the field of engineering. Students will learn strategies for conducting a successful job search, including the preparation of resumes and cover letters; behavioral interviewing techniques and effective use of social media in the application process. Professional and ethical responsibilities during the job search and for co-op and subsequent professional experiences will be discussed. (This course is restricted to students in Kate Gleason College of Engineering with at least 2nd year standing.) Lecture 1 (Fall, Spring).

0 semester credit hours

ISEE-200

General Education – Elective: Computing for Engineers

This course aims to help undergraduate students in understanding the latest software engineering techniques and their applications in the context of industrial and systems engineering. The subjects of this course include the fundamental concepts and applications of computer programming, software engineering, computational problem solving, and statistical techniques for data mining and analytics. (This class is restricted to ISEE-BS, ENGRX-UND, or ISEEDU Major students.) Lecture 3 (Spring).

3 semester credit hours

ISEE-325

Engineering Statistics and Design of Experiments

This course covers statistics for use in engineering as well as the primary concepts of experimental design. The first portion of the course will cover: Point estimation; hypothesis testing and confidence intervals; one- and two-sample inference. The remainder of the class will be spent on concepts of design and analysis of experiments. Lectures and assignments will incorporate real-world science and engineering examples, including studies found in the literature. (Prerequisites: STAT-251 or MATH-251 or equivalent course.) Lecture 3 (Fall, Spring).

3 semester credit hours

ISEE-345

Engineering Economy

Time value of money, methods of comparing alternatives, depreciation and depletion, income tax consideration and capital budgeting. Cannot be used as a professional elective for ISE majors. Course provides a foundation for engineers to effectively analyze engineering projects with respect to financial considerations. Lecture 3 (Fall, Spring).

3 semester credit hours

ISEE-499

Co-op (summer)

One semester of paid work experience in industrial engineering. (Prerequisites: ISEE-120 and EGEN-99 and students in the ISEE-BS program.) CO OP (Fall, Spring, Summer).

0 semester credit hours

MATH-221

General Education – Elective: Multivariable and Vector Calculus

This course is principally a study of the calculus of functions of two or more variables, but also includes a study of vectors, vector-valued functions and their derivatives. The course covers limits, partial derivatives, multiple integrals, Stokes' Theorem, Green's Theorem, the Divergence Theorem, and applications in physics. Credit cannot be granted for both this course and MATH-219. (Prerequisite: C- or better MATH-173 or MATH-182 or MATH-182A or equivalent course.) Lecture 4 (Fall, Spring, Summer).

4 semester credit hours

MATH-233

General Education – Elective: Linear Systems and Differential Equations

This is an introductory course in linear algebra and ordinary differential equations in which a scientific computing package is used to clarify mathematical concepts, visualize problems, and work with large systems. The course covers matrix algebra, the basic notions and techniques of ordinary differential equations with constant coefficients, and the physical situation in which they arise. (Prerequisites: MATH-172 or MATH-182 or MATH-182A and students in CHEM-BS or CHEM-BS/MS or ISEE-BS programs.) Lecture 4 (Spring).

4 semester credit hours

MATH-251

General Education – Elective: Probability and Statistics

This course introduces sample spaces and events, axioms of probability, counting techniques, conditional probability and independence, distributions of discrete and continuous random variables, joint distributions (discrete and continuous), the central limit theorem, descriptive statistics, interval estimation, and applications of probability and statistics to real-world problems. A statistical package such as Minitab or R is used for data analysis and statistical applications. (Prerequisites: MATH-173 or MATH-182 or MATH 182A or equivalent course.) Lecture 3 (Fall, Spring, Summer).

3 semester credit hours

MECE-200

Fundamentals of Mechanics

Statics: equilibrium, the principle of transmissibility of forces, couples, centroids, trusses and friction. Introduction to strength of materials: axial stresses and strains, statically indeterminate problems, torsion and bending. Dynamics: dynamics of particles and rigid bodies with an introduction to kinematics and kinetics of particles and rigid bodies, work, energy, impulse momentum and mechanical vibrations. Emphasis is on problem solving. For students majoring in industrial and systems engineering. (Prerequisites: PHYS-211 or PHYS-211A or 1017-312 or 1017-312T or 1017-389 or PHYS-206 and PHYS-207 or equivalent course.and restricted to students in ISEE-BS or ISEEDU-BS programs.) Lecture 4 (Spring).

4 semester credit hours

PHYS-212

General Education – Natural Science Inquiry Perspective: University Physics II

This course is a continuation of PHYS-211, University Physics I. subjects include electrostatics, Gauss' law, electric field and potential, capacitance, resistance, DC circuits, magnetic field, Ampere's law, inductance, and geometrical and physical optics. The course is taught in a lecture/workshop format that integrates the material traditionally found in separate lecture and laboratory courses. (Prerequisites: (PHYS-211 or PHYS-211A or PHYS-206 or PHYS-216) or (MECE-102, MECE-103 and MECE-205) and (MATH-182 or MATH-172 or MATH-182A) or equivalent courses. Grades of C- or better are required in all prerequisite courses.) Lec/Lab 6 (Fall, Spring).

4 semester credit hours

General Education – Global Perspective: 3 semester credit hours

General Education – Social Perspective: 3 semester credit hours

Third Year

ISEE-301

Operations Research

An introduction to optimization through mathematical programming and stochastic modeling techniques. Course subjects include linear programming, transportation and assignment algorithms, Markov Chain queuing and their application on problems in manufacturing, health care, financial systems, supply chain, and other engineering disciplines. Special attention is placed on sensitivity analysis and the need of optimization in decision-making. The course is delivered through lectures and a weekly laboratory where students learn to use state-of-the-art software packages for modeling large discrete optimization problems. (Prerequisites: MATH-233 or (MATH-231 and MATH-241) or equivalent course.) Lab 2 (Spring).

4 semester credit hours

ISEE-304

Fundamentals of Materials Science

This course provides the student with an overview of structure, properties, and processing of metals, polymers, ceramics and composites. There is a particular emphasis on understanding of materials and the relative impact on manufacturing optimization throughput and quality as it relates to Industrial Engineering. This course is delivered through lectures and a weekly laboratory. (This course is restricted to ISEE-BS Major students.) Lab 2 (Spring).

3 semester credit hours

ISEE-323

Systems and Facilities Planning

A basic course in quantitative models on layout, material handling, and warehousing. subjects include product/process analysis, flow of materials, material handling systems, warehousing and layout design. A computer-aided layout design package is used. (Corequisites: ISEE-301 or equivalent course.) Lab 2 (Spring).

3 semester credit hours

ISEE-330

Ergonomics and Human Factors (WI-PR)

This course covers the physical and cognitive aspects of human performance to enable students to design work places, procedures, products and processes that are consistent with human capabilities and limitations. Principles of physical work and human anthropometry are studied to enable the student to systematically design work places, processes, and systems that are consistent with human capabilities and limitations. In addition, the human information processing capabilities are studied, which includes the human sensory, memory, attention and cognitive processes; display and control design principles; as well as human computer interface design. (Co-requisites: ISEE-325 or STAT-257 or MATH-252 or equivalent course.) Lecture 4 (Spring).

4 semester credit hours

ISEE-350

Engineering Management

Development of the fundamental engineering management principles of industrial enterprise, including an introduction to project management. Emphasis is on project management and the development of the project management plan. At least one term of previous co-op experience is required. (Prerequisite: BIME-499 or MECE-499 or ISEE-499 or CHME-499 or EEEE-499 or CMPE-499 or MCEE-499 or equivalent course.) Lecture 3 (Spring).

3 semester credit hours

ISEE-499

Co-op (fall, summer)

One semester of paid work experience in industrial engineering. (Prerequisites: ISEE-120 and EGEN-99 and students in the ISEE-BS program.) CO OP (Fall, Spring, Summer).

0 semester credit hours

Fourth Year

ISEE-420

Production Planning/Scheduling

A first course in mathematical modeling of production-inventory systems. subjects included: Inventory: Deterministic Models, Inventory: Stochastic Models, Push v. Pull Production Control Systems, Factory Physics, and Operations Scheduling. Modern aspects such as lean manufacturing are included in the context of the course. (Prerequisites: ISEE-301 and (STAT-251 or MATH-251) or equivalent course.) Lecture 3 (Fall).

3 semester credit hours

ISEE-499

Co-op (summer)

One semester of paid work experience in industrial engineering. (Prerequisites: ISEE-120 and EGEN-99 and students in the ISEE-BS program.) CO OP (Fall, Spring, Summer).

0 semester credit hours

ISEE-510

Systems Simulation

Computer-based simulation of dynamic and stochastic systems. Simulation modeling and analysis methods are the focus of this course. A high-level simulation language such as Simio, ARENA, etc., will be used to model systems and examine system performance. Model validation, design of simulation experiments, and random number generation will be introduced. (Prerequisites: ISEE-200 and ISEE-301 or equivalent course. Co-requisites: ISEE-325 or STAT-257 or MATH-252 or equivalent course.) Lecture 3 (Fall, Spring).

3 semester credit hours

ISEE-560

Applied Statistical Quality Control

An applied approach to statistical quality control utilizing theoretical tools acquired in other math and statistics courses. Heavy emphasis on understanding and applying statistical analysis methods in real-world quality control situations in engineering. subjects include process capability analysis, acceptance sampling, hypothesis testing and control charts. Contemporary subjects such as six-sigma are included within the context of the course. (Prerequisites: ISEE-325 or STAT-257 or MATH-252 or equivalent course and students in ISEE-BS or ISEE-MN or ENGMGT-MN programs.) Lecture 3 (Fall).

3 semester credit hours

ISEE-760

Design of Experiments

This course presents an in-depth study of the primary concepts of experimental design. Its applied approach uses theoretical tools acquired in other mathematics and statistics courses. Emphasis is placed on the role of replication and randomization in experimentation. Numerous designs and design strategies are reviewed and implications on data analysis are discussed. Topics include: consideration of type 1 and type 2 errors in experimentation, sample size determination, completely randomized designs, randomized complete block designs, blocking and confounding in experiments, Latin square and Graeco Latin square designs, general factorial designs, the 2k factorial design system, the 3k factorial design system, fractional factorial designs, Taguchi experimentation. (Prerequisites: ISEE-325 or STAT-252 or MATH-252 or equivalent course or students in ISEE-MS, ISEE-ME, SUSTAIN-MS, SUSTAIN-ME or ENGMGT-ME programs.) Lecture 3 (Spring).

3 semester credit hours

Professional Electives: 6 semester credit hours

Open Electives: 9 semester credit hours

Professional Elective/Engineering Management Elective: 3 semester credit hours

General Education – Immersion 1, 2: 6 semester credit hours

Fifth Year

ACCT-794

Cost Management in Technical Organizations

A first course in accounting for students in technical disciplines. subjects include the distinction between external and internal accounting, cost behavior, product costing, profitability analysis, performance evaluation, capital budgeting, and transfer pricing. Emphasis is on issues encountered in technology intensive manufacturing organizations. *Note: This course is not intended for Saunders College of Business students. (Enrollment in this course requires permission from the department offering the course.) Lecture 3 (Spring).

3 semester credit hours

ISEE-497

Multidisciplinary Senior Design I

This is the first in a two-course sequence oriented to the solution of real world engineering design problems. This is a capstone learning experience that integrates engineering theory, principles, and processes within a collaborative environment. Multidisciplinary student teams follow a systems engineering design process, which includes assessing customer needs, developing engineering specifications, generating and evaluating concepts, choosing an approach, developing the details of the design, and implementing the design to the extent feasible, for example by building and testing a prototype or implementing a chosen set of improvements to a process. This first course focuses primarily on defining the problem and developing the design, but may include elements of build/ implementation. The second course may include elements of design, but focuses on build/implementation and communicating information about the final design. (Prerequisites: ISEE-323 and ISEE-330 or equivalent course. Co-requisites: ISEE-350 and ISEE-420 and ISEE-510 and ISEE-560 or equivalent course.) Lecture 3 (Fall, Spring, Summer).

3 semester credit hours

ISEE-498

Multidisciplinary Senior Design II

This is the second in a two-course sequence oriented to the solution of real world engineering design problems. This is a capstone learning experience that integrates engineering theory, principles, and processes within a collaborative environment. Multidisciplinary student teams follow a systems engineering design process, which includes assessing customer needs, developing engineering specifications, generating and evaluating concepts, choosing an approach, developing the details of the design, and implementing the design to the extent feasible, for example by building and testing a prototype or implementing a chosen set of improvements to a process. The first course focuses primarily on defining the problem and developing the design, but may include elements of build/ implementation. This second course may include elements of design, but focuses on build/implementation and communicating information about the final design. (Prerequisites: ISEE-497 or equivalent course.) Lecture 3 (Fall, Spring).

3 semester credit hours

ISEE-561

Linear Regression Analysis

In any system where parameters of interest change, it may be of interest to examine the effects that some variables exert (or appear to exert) on others. "Regression analysis" actually describes a variety of data analysis techniques that can be used to describe the interrelationships among such variables. In this course we will examine in detail the use of one popular analytic technique: least squares linear regression. Cases illustrating the use of regression techniques in engineering applications will be developed and analyzed throughout the course. (Prerequisites: (MATH-233 or (MATH-231 and MATH-241)) and (ISEE-325 or STAT-257 or MATH-252) or equivalent courses and students in ISEE-BS programs.) Lecture 3 (Fall).

3 semester credit hours

ISEE-750

Systems and Project Management

This course ensures progress toward objectives, proper deployment and conservation of human and financial resources, and achievement of cost and schedule targets. The focus of the course is on the utilization of a diverse set of project management methods and tools. subjects include strategic project management, project and organization learning, chartering, adaptive project management methodologies, structuring of performance measures and metrics, technical teams and project management, risk management, and process control. Course delivery consists of lectures, speakers, case studies, and experience sharing, and reinforces collaborative project-based learning and continuous improvement. (Prerequisites: ISEE-350 or equivalent course or students in ISEE BS/MS, ISEE BS/ME, ISEE-MS, SUSTAIN-MS, ENGMGT-ME, PRODDEV-MS, MFLEAD-MS, or MIE-PHD programs.) Lecture 3 (Fall).

3 semester credit hours

ISEE-771

Engineering of Systems I

The engineering of a system is focused on the identification of value and the value chain, requirements management and engineering, understanding the limitations of current systems, the development of the overall concept, and continually improving the robustness of the defined solution. EOS I & II is a 2-semester course sequence focused on the creation of systems that generate value for both the customer and the enterprise. Through systematic analysis and synthesis methods, novel solutions to problems are proposed and selected. This first course in the sequence focuses on the definition of the system requirements by systematic analysis of the existing problems, issues and solutions, to create an improved vision for a new system. Based on this new vision, new high-level solutions will be identified and selected for (hypothetical) further development. The focus is to learn systems engineering through a focus on an actual artifact (This course is restricted to students in the ISEE BS/MS, ISEE BS/ME, ISEE-MS, SUSTAIN-MS, PRODDEV-MS, MFLEAD-MS, ENGMGT-ME, or MIE-PHD programs or those with 5th year standing in ISEE-BS or ISEEDU-BS.) Lecture 3 (Fall, Spring).

Choose one of the following: 3 semester credit hours

   ISEE-792

   Engineering Capstone

Students must investigate a discipline-related subject in a field related to industrial and systems engineering, engineering management, sustainable engineering, product development, or manufacturing leadership. The general intent of the engineering capstone is to demonstrate the students' knowledge of the integrative aspects of a particular area. The capstone should draw upon skills and knowledge acquired in the program. (This course is restricted to students in ISEE-MS, ENGMGT-ME, SUSTAIN-MS, PRODDEV-MS, MFLEAD-MS or the ISEE BS/MS programs.) Lecture 3 (Fall, Spring).

     ISEE-794

   Leadership Capstone plus 1 additional Engineering Elective

For students enrolled in the BS/ME dual degree program. Student must either: 1) serve as a team leader for the multidisciplinary senior design project, where they must apply leadership, project management, and system engineering skills to the solution of unstructured, open-ended, multi-disciplinary real-world engineering problems, or 2) demonstrate leadership through the investigation of a discipline-related topic. (Enrollment in this course requires permission from the department offering the course.) Seminar (Fall, Spring).

   

Engineering Management Electives: 6 semester credit hours

General Education – Immersion 3: 3 semester credit hours

Total Semester Credit Hours: 150

InsideAIML Launches 100% Job Guarantee Program with AskTalos

InsideAIML, a dedicated platform for AI related career transformation programs, announced a Master’s in Artificial Intelligence program with 100% Job Guarantee, jointly with AskTalos.com.

Under the InsideAIML - AskTalos Partnership, the master’s program will use AskTalos’ technology, tools, and experts from the AI domain; and InsideAIML’s curriculum, certification, and blended learning approach.

To provide practical, industry-aligned learning, students will use a combination of self-paced videos and live virtual classes with extensive hands-on exposure to the AskTalos expert team. Because the program is run in collaboration with AskTalos, the AskTalos team will provide industry-relevant proof-of-concept (POC) work while students are learning.

The course will be offered at minimal cost, and class sizes will be limited, with one to two teachers and instructors for each batch of 30 to 40 candidates.

InsideAIML guarantees that if a student follows all deadlines, but is still unable to find work six months after completing the course, he or she will receive a refund of the program money.

"We prepare students for the industry, with our partnership with over 500+ hiring partners. InsideAIML has connected with Addle India, Mindtree, Amazon, Flipkart, Microsoft, Zomato, Accenture, Adobe, One 97 Communications, Cisco, IBM, L&T Infotech, TCS, Morgan Stanley, JP Morgan, Goldman Sachs, HP, EY for placement” said Vikram Bhakre, CTO of InsideAIML.

He also stated, "We recognize the anxiety and concerns that learners may have about upskilling during their careers - will they be able to find a new job? Will the new abilities aid them in their existing position? Given the most prevalent concerns, we wish to alleviate their worries with the InsideAIML Job Guarantee Program, which allows aspirants to study and develop their skills while receiving a job guarantee upon completion of the program."

Manish Kumar, who found a job as a Project Manager in Crestron through InsideAIML, said "While working in a non-technical field, I realized that in order to move up in my career, I needed to upskill. I decided to enroll in the InsideAIML Master's in AI with a 100 percent Job Guarantee program. After this course, I was not only equipped with new skills in a very sought-after technology but also got a job as a Project Manager in Credit Suisse."

Graduates from any discipline are eligible to apply for the Master's in AI with the Job Guarantee Program. While prior work experience is not required for these programs, applicants must have received 55 percent or above in all of their classes throughout high school, college, and graduation.

So, whether you're already employed but looking for better opportunities, or looking to join one of the tech giants to build a solid career foundation, the InsideAIML 100% Job Guarantee Program can help you not only get started but also catapult you to greater heights.

Disclaimer: This article is a paid publication and does not have journalistic/editorial involvement of Hindustan Times. Hindustan Times does not endorse/subscribe to the content(s) of the article/advertisement and/or view(s) expressed herein. Hindustan Times shall not in any manner, be responsible and/or liable in any manner whatsoever for all that is stated in the article and/or also with regard to the view(s), opinion(s), announcement(s), declaration(s), affirmation(s) etc., stated/featured in the same.
