Read these CPIM-MPR questions and answers before the actual test

Try not to waste your energy on the free CPIM-MPR cram material available on the web; most of it is out of date. Visit killexams.com to download 100 percent free questions and answers before you register for the complete CPIM-MPR question bank containing actual test CPIM-MPR questions and answers and a VCE practice test. Read and pass, with no wasted time or money.

Exam Code: CPIM-MPR Practice exam 2022 by Killexams.com team
Certified in Production and Inventory Management - Master Planning of Resources
APICS Production thinking
APICS Basics of Supply Chain Management

The American Production and Inventory Control Society (APICS) was founded in 1957 for the purpose of “building and validating knowledge in supply chain and operations management.” Today, APICS is an international organization with over 40,000 members that provides training and educational opportunities in the form of professional certifications, professional courses, workshops and resource materials for supply chain management professionals. One of the certifications offered by APICS is the CSCP, or Certified Supply Chain Professional. The certification is often required by employers for key personnel in charge of managing the production and distribution of their products.

Definition of Supply Chain Management

While supply chain management incorporates logistics, its scope is far greater.
A supply chain is a system of organizations, people, technologies, activities, information and resources involved in moving materials, products and services all the way through the manufacturing process, from the original supplier of materials to the end customer. Supply chain management is the supply and demand management of these materials, products and services within and across companies. This includes the oversight of products as they move from supplier to manufacturer to wholesaler to retailer to consumer. Some companies use the term "logistics" interchangeably with "supply chain management," while others distinguish between the two terms. The distinction is that supply chain management does not just oversee the tracking of materials or products through shipment, but spans all movement and storage of raw materials, works-in-process, finished goods and inventory from the point of origin to the point of consumption. It involves the coordination of processes and activities with and across other business operations into a cohesive and high-performing business model.

Strategies

Storing large amounts of inventory is expensive and can expose a company to losses.
The ultimate goal of a successful supply chain management strategy is to ensure that products are available when they are needed, thereby reducing the need to store large amounts of inventory. Supply chain management strategies must incorporate the distribution network configuration. Distribution networks consist of the number and location of suppliers, production facilities, distribution centers, warehouses and customers. These must be integrated with all the information systems that process the transfer of goods and materials, including forecasting, inventory and transportation.

Supply Chain Operational Flows

While there are only three primary operational flows, supply chain management can be extremely complex.
Supply chain management oversees three primary flows. Product flow involves the movement of goods and materials through the manufacturing process from suppliers through consumers. Information flow involves the transmitting of orders and the tracking of goods and products through delivery. Financial flow consists of payment schedules, credit terms, consignments and title ownership agreements.

Learning the Basics from APICS

APICS will assist you in determining which of their programs best suits your needs.
APICS’s Basics of Supply Chain Management is an online course that is designed to prepare you for the BSCM exam. APICS also offers several course options on supply chain management in preparation for certification. What APICS calls "Foundational Courses" are not for individuals seeking certification, but rather for those who want to develop skills and knowledge in supply chain and operations management. "Certification Review Courses" are designed for those seeking CSCP designations. Workshops are offered for continuing education. Continuing education is a requirement for maintaining CSCP certification, which must be renewed every five years. APICS also publishes several manuals that provide an overview of the curriculum, test specifications, test-taking advice, key terminology and sample questions with their answers.

Source: https://smallbusiness.chron.com/apics-basics-supply-chain-management-41462.html (15 Aug 2020)
What You Should Know before Deploying ML in Production

Transcript

Lazzeri: I'm Francesca Lazzeri. I'm a Principal Data Scientist Manager at Microsoft. I'm also Professor of Machine Learning and AI at Columbia University. In this session, we're going to learn together what you should know before deploying machine learning in production. There are many different limitations and opportunities during the machine learning lifecycle. MLOps, which stands for machine learning operations, can help data scientists and engineers overcome these limitations and actually see them as opportunities. The importance of MLOps is due to the following reasons. First of all, machine learning models rely on a huge amount of data, and so it is very difficult for data scientists, but also engineers, to keep track of all of it. It is also challenging to keep track of the different parameters that we tweak in machine learning models. As you know, small changes can lead to very big differences in the results that you get from your machine learning models. We also have to keep track of the features that the model works with. Feature engineering is another very important part of the machine learning lifecycle and can really impact your model accuracy. Then, monitoring a machine learning model is not really like monitoring or deploying software or a web app. Debugging a machine learning model is a very complicated, very complex type of work, because models rely on real world data for predicting. As real world data changes, it is important to track your model and also update your model. This means that we have to keep track of new data changes and make sure that the model learns from them.

What You Should Know Before Deploying ML in Production

What are the four different aspects that you should know before deploying machine learning in production? We're going to look at four key aspects. These are different MLOps capabilities, open source integration, machine learning pipelines, and MLflow.

MLOps Capabilities

Let's start with MLOps capabilities. There are many different MLOps capabilities. The most important is the capability of creating reproducible machine learning pipelines. Machine learning pipelines allow you to define repeatable and reusable steps for your data preparation, training, and scoring processes. It's also important to create reusable software environments for training and deploying models. Then, registering, packaging, and deploying models from anywhere is a very important MLOps capability. You also need to track the associated metadata that is required to use the model. Capturing the governance data for the end-to-end machine learning lifecycle is another important aspect. Here, the lineage information can include, for example, who published the model, why changes were made at some point, and when different models were deployed or used in production. It is also important to notify and alert on events in the machine learning lifecycle: for example, experiment completion, model registration, model deployment, and data drift detection are all important notifications that you should have. Then, monitor machine learning applications for operational and ML-related issues. Here, it is important for data scientists to compare model inputs between training and inference, for example, and also to explore model-specific metrics and provide monitoring and alerts on your machine learning infrastructure. Finally, the last MLOps capability that I think is extremely important is the option of automating the end-to-end ML lifecycle with different machine learning pipelines. Using pipelines allows you to frequently update models, test new models, and roll out new models alongside your other AI applications and services.

Open Source Integration

Then, there is the second aspect that you should know before deploying machine learning in production. This is about open source integration. Here, there are three different steps that I think are extremely important when you think about open source integration. These are the option of training open source machine learning models, which is great for accelerating your machine learning solutions; open source frameworks for interpretable and fair models; and finally, different open source tools for model deployment. Let's start with training open source machine learning models. There are many different open source frameworks. Here, I listed only three of them: PyTorch, TensorFlow, and RAY. These are the three open source frameworks that I use the most.

PyTorch is an end-to-end machine learning framework, and it includes what we call TorchServe, which is an easy-to-use tool for deploying PyTorch models at scale. What is nice about PyTorch is that there is mobile deployment support and also cloud platform support. It's very nice and useful to use. Finally, the last thing that I want to mention about PyTorch is its C++ frontend support. This frontend is a pure C++ interface to PyTorch that follows the design and the architecture of the Python frontend. The other framework is TensorFlow.
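
Before moving on to TensorFlow, here is a minimal, hypothetical sketch of how a trained PyTorch model can be scripted and saved as a TorchScript artifact, which is the kind of serialized model that TorchServe handlers, the mobile runtimes, and the C++ (LibTorch) frontend consume. The model and file names are illustrative, not taken from the talk.

```python
import torch
import torch.nn as nn

# A small placeholder model; any trained nn.Module works the same way.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet().eval()

# Convert to TorchScript so the model can run outside Python
# (TorchServe, PyTorch Mobile, or the C++ LibTorch frontend).
scripted = torch.jit.script(model)
torch.jit.save(scripted, "tiny_net.pt")

# The serialized model can later be reloaded without the original class definition.
restored = torch.jit.load("tiny_net.pt")
print(restored(torch.randn(1, 4)))
```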

TensorFlow is another end-to-end machine learning framework that is very popular in the industry. What I really like about TensorFlow is the option of using TensorFlow Extended, which is an end-to-end platform for preparing data, training, validating, and also deploying machine learning models in large production environments. A TensorFlow Extended pipeline is a sequence of components that implement a machine learning pipeline, which is specifically designed for scalable and high-performance machine learning tasks. This is another great option that you have.

The last option that I want to mention is RAY. RAY is for reinforcement learning types of scenarios. The package includes the following libraries: Tune, RLlib, Train, and Dataset. Tune is great for hyperparameter tuning. RLlib is used for reinforcement learning. Train is for distributed deep learning. Then we have Dataset, which is for distributed data loading. The other two libraries that I want to mention for RAY are Serve and Workflows. These are libraries that are great at taking your machine learning models and distributed apps to production.
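
As an illustration of the Tune library mentioned above, here is a small, hypothetical hyperparameter-tuning sketch. The objective function is a toy stand-in for real training code, and the reporting call uses the classic Tune API; exact signatures vary across RAY versions, so treat this as a sketch rather than a definitive recipe.

```python
from ray import tune

# Toy objective: pretend "score" is a validation metric we want to minimize.
# In a real project this function would train a model with the given hyperparameters.
def objective(config):
    score = (config["lr"] * 100 - 3) ** 2 + config["layers"]
    tune.report(score=score)  # classic Tune reporting API; newer Ray versions use session.report

analysis = tune.run(
    objective,
    config={
        "lr": tune.loguniform(1e-4, 1e-1),  # search space for the learning rate
        "layers": tune.choice([1, 2, 3]),   # search space for a discrete parameter
    },
    num_samples=20,
    metric="score",
    mode="min",
)
print(analysis.best_config)
```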

In terms of open source integration, there are two other open source frameworks that you should be aware of. These are frameworks for interpretable and fair models: InterpretML and Fairlearn. InterpretML is an open source package that incorporates machine learning interpretability techniques under one roof. With this package, you can train interpretable glassbox models and also explain blackbox systems. Moreover, it helps you understand your model's global behavior, or understand the reasons behind individual predictions. Again, it is a great option when you have to build interpretable machine learning models. The other framework is Fairlearn. Fairlearn is a Python package that has two main components that I use most of the time. The first is metrics for assessing which groups are negatively impacted by a model, and for comparing multiple models in terms of their fairness and accuracy metrics. The other component is algorithms. This is great because you have different algorithms for mitigating unfairness in a variety of AI and machine learning tasks, and also with different fairness definitions.
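
To make Fairlearn's metrics component concrete, here is a minimal sketch that uses MetricFrame to disaggregate accuracy and recall by a sensitive feature; the labels, predictions, and group values are made up purely for illustration.

```python
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score
from fairlearn.metrics import MetricFrame

# Hypothetical labels, predictions, and a sensitive feature (e.g. a demographic group).
y_true = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = pd.Series([1, 0, 0, 1, 0, 1, 1, 0])
group = pd.Series(["A", "A", "A", "B", "B", "B", "B", "A"])

# Disaggregate metrics by group to see which groups the model serves worse.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "recall": recall_score},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(mf.overall)       # metrics over the whole dataset
print(mf.by_group)      # metrics broken down per group
print(mf.difference())  # largest gap between groups, a simple fairness signal
```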

Model Deployment - ONNX

Finally, the third aspect under open source integration is about model deployment. When working with different frameworks and tools, you have to deploy models according to each framework's requirements. In order to standardize this process, you can use what we call the ONNX format. ONNX stands for Open Neural Network Exchange. ONNX is an open source format for artificial intelligence models, or for machine learning models. ONNX supports interoperability between frameworks. This means that you can train a model in one of the many popular machine learning frameworks, for example, PyTorch, TensorFlow, and RAY. You can convert it into ONNX format and you can consume the ONNX model in different frameworks, for example, in ML.NET.
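
As a concrete example of such a conversion, here is a minimal sketch that exports a placeholder PyTorch model to ONNX with torch.onnx.export; the model, input shape, and file name are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Any trained PyTorch model can be exported; this one is just a placeholder.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()
dummy_input = torch.randn(1, 4)  # example input that fixes the graph's shapes

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},  # allow variable batch size
)
```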

Specifically, there is ONNX Runtime. What is ONNX Runtime? ONNX is an open source format that is built to represent machine learning models. What is nice about ONNX is that it defines a common set of operators, the building blocks of machine learning and deep learning models, and then a common file format to enable data scientists and AI developers to use models with a variety of different frameworks, tools, runtimes, and compilers. ONNX Runtime (ORT) is great at optimizing and accelerating machine learning inferencing and training. You can, for example, train in Python and deploy with C#, Java, JavaScript, and many more. If you have specific questions about how to use ONNX and ONNX Runtime on Azure, feel free to contact Cassie Breviu. She is a fantastic product manager at Microsoft. She's always looking for scenarios on how data scientists and machine learning engineers are using ONNX and ONNX Runtime.

The other nice aspect of leveraging ONNX Runtime is the inference option. Of course, ONNX Runtime inference can enable faster customer experiences and also lower your cost, which is great. It supports models from deep learning frameworks such as PyTorch and TensorFlow, but also classical machine learning libraries such as Scikit-learn. There are many different examples of use cases for ONNX Runtime inferencing. For example, it improves the inference performance for a wide variety of machine learning models. It runs on different hardware and operating systems. You can train in Python and then, for example, deploy into a C#, C++, or Java app. Finally, you can train and perform inference with models created in different frameworks. All of these represent excellent use cases and reasons why you should use and explore ONNX and ONNX Runtime.

There are many different popular frameworks that support conversion to ONNX. For some of these, for example PyTorch, ONNX format export is built in. For others, like TensorFlow or Keras, there are separate installable packages that you can use in order to handle this conversion. Here are some examples of model conversion. The process is very straightforward. First of all, you need to get a model. This model can be trained in any framework that supports export and conversion to ONNX format. Then you need to load and run the model with ONNX Runtime. The third step is about tuning performance using various runtime configurations or hardware accelerators, in order to optimize your model and tune performance.
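
Here is a minimal sketch of the second and third steps, loading the exported model with ONNX Runtime and running inference; the input name must match the one used at export time, and the execution-provider list (CPU only) is an assumption.

```python
import numpy as np
import onnxruntime as ort

# Step 2: load the exported model and run inference with ONNX Runtime.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

x = np.random.randn(3, 4).astype(np.float32)  # batch of 3 samples
outputs = session.run(None, {"input": x})     # None -> return all model outputs
print(outputs[0].shape)

# Step 3 (tuning) would swap in other execution providers, e.g. CUDAExecutionProvider,
# or adjust SessionOptions, and measure latency/throughput for your workload.
```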

Machine Learning Pipelines

The third aspect that you should know before deploying machine learning in production is about machine learning pipelines and how you can build these pipelines for your machine learning solution. Machine learning pipelines should focus on machine learning tasks such as data preparation, including importing, validating, and cleaning, transformation, normalization, and staging of your data. Then, there is training configuration, including parameterizing arguments, file paths, and logging and reporting configuration. Then there is training and validating in a way that is efficient and also repeatable. Efficiency might come from specific data subsets, different hardware, compute resources, distributed processing, and also progress monitoring. Finally, there is the deployment step, which includes versioning, scaling, provisioning, and access control. One of the questions that I get most of the time is: which pipeline technology should I use? Here, I list three different scenarios. There is model orchestration, which is about machine learning models. Then we have data orchestration, which is about data preparation. Then you have code and application orchestration.

Let's start from the first one. Here we have model orchestration. The primary persona is a data scientist. In terms of open source options, we have Kubeflow Pipelines that you can leverage. The canonical pipeline is from data to model. Then we have data orchestration, which is about data preparation. The primary persona is a data engineer. In terms of open source options, we have Apache Airflow. The canonical pipeline here is data to data. Finally, the third scenario that I find very popular is code and application orchestration. Here, the primary persona is an app developer. The canonical pipeline here is code plus model, to an app and a service.
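
As a sketch of the first scenario (model orchestration), here is a small data-to-model pipeline written with the Kubeflow Pipelines v1 SDK; the container images and scripts are hypothetical, and the newer v2 SDK uses a different, component-based API.

```python
from kfp import compiler, dsl

@dsl.pipeline(name="data-to-model", description="Prepare data, then train a model.")
def data_to_model_pipeline():
    # Hypothetical container images and scripts; in practice these are your own images.
    prep = dsl.ContainerOp(
        name="prepare-data",
        image="myregistry/prep:latest",
        command=["python", "prep.py"],
    )
    train = dsl.ContainerOp(
        name="train-model",
        image="myregistry/train:latest",
        command=["python", "train.py"],
    )
    train.after(prep)  # the model step depends on the data step

# Compile to a workflow spec that a Kubeflow Pipelines cluster can run.
compiler.Compiler().compile(data_to_model_pipeline, "data_to_model.yaml")
```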

When you create and run a pipeline object, the following high-level steps occur. This is an example of a pipeline object that is created on Azure Machine Learning. For each step, the service calculates requirements for the hardware (compute resources), OS resources (for example, Docker images), software resources (for example, Conda environments), and data inputs. Then the service determines the dependencies between steps, resulting in a very dynamic execution graph. When each node in the execution graph runs, the service configures the necessary hardware and software environment. Then the step runs, providing logging and monitoring information to its containing experiment object. When the step completes, its outputs are prepared as inputs to the next step. Finally, the resources that are no longer needed are finalized and also detached.
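
A minimal sketch of such a pipeline object, written against the Azure Machine Learning Python SDK v1 classes; the workspace configuration, compute target name, and scripts are assumptions, and the newer SDK v2 exposes a different surface.

```python
from azureml.core import Experiment, Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()  # assumes a local config.json pointing at an existing workspace

# Two steps; the service resolves compute, OS images, and environments per step.
prep_step = PythonScriptStep(
    name="prepare-data",
    script_name="prep.py",
    source_directory="./src",
    compute_target="cpu-cluster",  # hypothetical compute target name
)
train_step = PythonScriptStep(
    name="train-model",
    script_name="train.py",
    source_directory="./src",
    compute_target="cpu-cluster",
)
train_step.run_after(prep_step)  # dependency edge in the execution graph

pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
run = Experiment(ws, "demo-pipeline").submit(pipeline)
run.wait_for_completion(show_output=True)
```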

MLflow

The final tool that you should consider before deploying machine learning in production is MLflow. Let's learn together what MLflow is. MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It tackles four primary functions that are extremely important in the machine learning lifecycle. These are: tracking experiments to record and compare parameters and results; packaging ML code in a reusable and reproducible form in order to share it with other data scientists or transfer it to a production environment; managing and deploying models from a variety of machine learning libraries to a variety of model serving and inference platforms; and finally, providing a central model store to collaborate on and manage the full lifecycle of a machine learning model, including model versioning, stage transitions, and annotations.

Let's start with the first one, which is MLflow Tracking. MLflow runs can be recorded to a local file, to a SQLAlchemy-compatible database, or remotely to a tracking server. You can log data to runs using the MLflow Python, R, or Java APIs, or the REST API. MLflow allows you to group runs under experiments, which can be useful for comparing runs, and also for comparing runs that are intended to tackle a particular task, for example. Then there are MLflow Projects. An MLflow Project is a format for packaging data science code in a reusable and reproducible way, based primarily on conventions. In addition, the Projects component includes an API and command-line tools for running projects, making it possible to chain projects together into workflows.
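
A minimal sketch of MLflow Tracking as just described; the experiment name, parameters, and metrics are hypothetical, and the small log file is created only so the artifact call has something to upload.

```python
import mlflow

mlflow.set_experiment("demo-experiment")  # group related runs under one experiment

# Create a small local file so the artifact-logging call below has something to upload.
with open("training.log", "w") as f:
    f.write("epoch 1: loss=0.90\nepoch 2: loss=0.74\n")

with mlflow.start_run(run_name="baseline"):
    # Hypothetical hyperparameters and results from a training script.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("rmse", 0.90, step=1)
    mlflow.log_metric("rmse", 0.74, step=2)  # metrics can be logged over steps
    mlflow.log_artifact("training.log")      # attach any local file to the run
```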

Then there are MLflow Models. An MLflow Model is a standard format for packaging machine learning models that can be used in a variety of downstream tools, for example, real-time serving through a REST API or batch inference on Apache Spark. Each model is a directory containing arbitrary files, together with an MLmodel file in the root of the directory that can define multiple flavors that the model can be viewed in. Then there is the MLflow Registry. The MLflow Model Registry is a centralized model store, a set of APIs, and a UI for managing the full lifecycle of an MLflow model in a collaborative way. It provides model lineage, as well as model versioning, stage transitions, and annotations. The Model Registry is extremely important if you're looking for a centralized model store and a set of APIs in order to manage the full lifecycle of your machine learning models.
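
A minimal sketch of logging a scikit-learn model in MLflow's packaging format and registering it in the Model Registry; the model name is illustrative, and the registry assumes a database-backed tracking store (a local SQLite file is enough for a demo).

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# The Model Registry needs a database-backed tracking store, not plain local files.
mlflow.set_tracking_uri("sqlite:///mlflow.db")

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=500).fit(X, y)

with mlflow.start_run():
    # Log the model in MLflow's packaging format and register it under a chosen name.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="iris-classifier",  # illustrative registry name
    )

# Later, anyone with access to the registry can load a specific version back.
loaded = mlflow.sklearn.load_model("models:/iris-classifier/1")
print(loaded.predict(X[:5]))
```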

Summary & References

These four aspects are extremely important. You should know them before deploying your machine learning solutions in production. Those four aspects are: different MLOps capabilities, different open source integrations and frameworks, different machine learning pipelines, and finally MLflow, an open source tool that can really help you with deploying machine learning in production. In this slide you can find a list of references to the different GitHub repos and documentation for some of the different open source tools that I have been using for this presentation.

Questions and Answers

Greco: Of all the stuff, certainly those four pillars, you need to have the font of those, like 64 point, because those are the critically important, those four pillars. It's amazing. Of those four, you certainly don't see new customers diving into those four. What are the big mistakes that customers make? They might not know about those four pillars, but what's a very common mistake, because there's certainly a lot of failed ML projects, more than we like, in the ML field. What are some common mistakes?

Lazzeri: One of the biggest mistakes that customers, but in general developers and also companies, do when we get to MLOps is about thinking of MLOps as a product. We have a lot of machine learning frameworks, we have a lot of cloud providers, and we have different AutoML capabilities. People think that MLOps is just another product or another set of tools that they need to add to their end-to-end machine learning solutions. That is not really the case. I think that it's more about culture, MLOps. It's more about thinking on how you can connect different tools in your end-to-end development experience, and how you can make sure that you are aware of these capabilities that I tried to summarize. How you can optimize some of these opportunities that you have. Really, MLOps is more about strategy and about making sure that you are aware of all the tools, and probably all of the tools that I was presenting are open source tools, which is great, because you have the support of the community. You can also contribute to those open source tools.

One of the biggest mistakes that customers and companies do most of the time is to think about MLOps as a static tool that you can just implement as you are implementing a machine learning or a deep learning framework. It's not really like this. It's more about making sure that you are aware of all these opportunities that you have on the table and you are able to connect these different options that you have in a way that is the best way for you and for your application.

Another important mistake that some of the companies I have been working on in the machine learning space are doing is also, they do not share the information between different professionals. Most of the time, we have data scientists that are speaking their own language, they are very familiar with some of the most important open source frameworks for machine learning, deep learning, and reinforcement learning. I mentioned some of them: RAY, TensorFlow, and PyTorch. Then, they're not really aware or familiar with the different open source tools to make sure that the deployment is successful. Then, how we can move them out of the machine learning model to a production environment and make sure that we can build an AI application that other people can consume. That is, in my opinion, another cultural aspect.

I think that it's important to have a technical team. As a manager, or as a developer, you need to know that there are different professionals that are probably working with specific tools, but you have to make sure that they communicate to each other so that they all have a very good understanding, a clear understanding of what the end-to-end solution is going to do. Most importantly, what's the final outcome that you want to solve to support with these end-to-end solutions? It's all about talking with data scientists, developers, data engineers, machine learning engineers. Different companies have different names and titles for these professionals. At the end of the day, those are all people who work in the machine learning industry. Some of them are responsible for the data preparation and the model training, testing, evaluation, and then deployment. Some others are responsible for the data pipelines and the model pipelines, and how they can deploy these models in production in a successful way. That is another mistake that most of the professionals in the industry do when we get to MLOps.

One of the biggest issues that we have at the moment in the industry is that we have so many great capabilities to develop machine learning models in open source frameworks. This is fantastic, because you're not just using inbuilt models from a specific provider, but you're leveraging the knowledge and the support of the open source community. Then, the issue that we have is that 80% of these models, they're never pushed to production, and they're never really used for a specific business case. Making sure that you are aware of these open source and MLOps capabilities, I think, is the key to make sure that you know how to put together all these different pieces, and how you can make sure that your team is talking, and they are all part of the same solution and of the same goal.

Greco: That's no different than traditional application building these days. It's like you better have an idea of what is the end goal, what problem are you solving. It's very important. I know some failures were, 10 data scientists were hired for a project and they failed to put in production and they failed to do data engineering. They were data scientists. You didn't have the rest of the team. It's certainly an issue.

You did talk about putting the models, whether it's through ONNX, or some other standard mechanism, into production. It seems like there is an interesting trend now of using multiple models, using multiple cooperating models, or maybe not even coop, maybe we have adversarial models, we have different models to use. How do you deploy something like that when you have different models? Any tricks or any tips on that?

Lazzeri: The first suggestion that I have for those types of data scientists that are using different typologies of models is to understand why they're using different models. It's more like models that are answering the same question, or some of those models are actually creating data features that then are used to feed other models, so we have a more like process of different models. Because if it is more like the second scenario where we have multiple models that are working together but process in a specific order, because some of those models are generating that type of information data that you need in order to run other models, it's a less complex situation. Because again, at the end of the day, you need to generate specific results that are generated only from the latest model that you have. You can use simple mechanisms, like you can deploy it in Python. You can create your pickle file, where you have just to make sure that you summarize all the important information. Most of the time, you summarize this information for your model with a Python function, we call them init and run functions. These are functions that you can just write in Python to make sure that you define how the data needs to be ingested, and then how the model needs to run. Then you proceed with a normal deployment process that you can do, like in any programming language that you prefer. The goal for that scenario is to generate this pickle file that then is going to be translated into a web application, web service. It is more like an API that other engineers can leverage to run this application. That is the second scenario. I started with the simplest one.
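
A minimal sketch of the init and run entry-script pattern mentioned here, as commonly used when wrapping a pickled model as a web service; the model path and the request payload shape are assumptions.

```python
import json

import joblib

model = None

def init():
    # Called once when the service starts: load the serialized (pickled) model.
    global model
    model = joblib.load("model.pkl")  # assumed path to the deployed model file

def run(raw_data):
    # Called per request: parse the JSON payload, score it, and return predictions.
    payload = json.loads(raw_data)
    features = payload["data"]  # assumed request format: {"data": [[...], [...]]}
    predictions = model.predict(features)
    return {"predictions": predictions.tolist()}
```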

The first scenario that is actually multiple models that are somehow running in parallel also to generate the same type of insight, predictions in order to support the same scenario. In that case, using tools such as ONNX, that is at the end of your machine learning architecture, in order to standardize, normalize all the different languages and all the different frameworks that you're using. I think that is the best scenario that you have. That is my suggestion. Again, my suggestions are all based on my experience as a data scientist manager. Based on what I've been seeing is that most of the time you are running multiple machine learning pipelines at the same time, because, again, you want to scale your solution using a standardizations tool at the end, like ONNX, is the best tool, at least, until now that I have been using.

The other quick suggestion that I have is about automated machine learning. That is another tool that many different providers have been using a lot. Automated machine learning is not just a blackbox tool; you can consume it with the Python SDK. Basically, what it does for you is not just selecting the best model, but actually running multiple models in parallel. Then it's doing hyperparameter tuning for you. It's also trying to select the best model based on your scenario and on your input data. That is another way to scale your solution, not, again, because you cannot select your own algorithm; at the end of the day, you are going to be the one who selects that. It's just an additional tool that can help you to scale your solution and also improve the time to production. Between ONNX and automated machine learning, those are the two tools and the two suggestions that I usually have for these types of machine learning scenarios.

Greco: We had a question about monitoring tools for models. Monitoring in a sense of prediction, accuracy, or application performance? Any suggestions there?

Lazzeri: Monitoring model in production is something that I have been doing more with machine learning pipelines. One of the tools, the Python packages that I have been using for that is actually collecting all the log information in the machine learning pipeline. This information is not just about metrics, but it's also about performance. It's basically extracting a report for you. This report is telling you how the model is performing both from an accuracy point of view, like if the model refreshing new data is still performing well. In a sense, it's still exporting good, accurate results or not. Then it's also giving you additional information on the genuine performance of the model. Like, is the model really healthy? Is it still performing in an accelerating way or no? That is something that I have been using.

The monitoring of the model is still a more manual type of work that I have been using. It's true that there is this package that is producing those reports for me, which is great. Based on my experience, we haven't built an anomaly detection model that is telling me, the model decreases performance, or the new data that the model is ingesting are not as good as the ones that we use in order to build the model. There are many different messages that this additional algorithm or solution can provide to me, and we haven't really done that as yet. For me, it's more like a manual check-in. However, I use this additional package that is providing me this very accurate report that is still very easy to digest, to look at. There is still some support. This is what I have done so far.


Source: https://www.infoq.com/presentations/ml-production-tips/ (16 Oct 2022)
What Are Audi's Designers Thinking?

With the shift away from internal-combustion engines to electrification and the march toward automated and autonomous vehicles, we're in the most transformative moment in automotive history.

Audi is one of the earliest adopters of new technology, and as engineering evolves, so does design. Rather than creating radically different designs for its initial e-tron EV offerings, Audi opts for more traditional styling that creates a bridge between the past and future, but that's only the first step.

We sat down with Oliver Hoffmann, Audi's chief development officer, and head of design Marc Lichte in Audi's Malibu, California, Design Loft to hear their thoughts on Audi's next steps.

As Lichte points out, having design studios in both California and Beijing allows designers to draw inspiration from Audi's most important markets. There are different sensibilities related to each, but appearances must still remain unmistakably Audi. As he describes it, "A Coke bottle is recognizable anywhere in the world, but the tastes vary slightly depending on the region."

With the advent of EV architecture's "skateboard" chassis, designers have newfound freedom with fewer of the constraints found with internal-combustion drivelines. That's not to say battles between designers and engineers are a thing of the past. Hoffmann quips that there are still discussions on millimeter scales, usually in regard to vehicle height.

Design technology is also evolving. Upon entering the Malibu Design Loft, there's no scent of clay or markers. Everything is now digital and incorporates 3D VR modeling for a more seamless and efficient workflow that spans continents. As a result, Audi has been creating concepts at a rapid pace.

Audi christened the Design Loft last year with a rollout of the Skysphere variable-wheelbase concept. The "sphere" nomenclature refers to the interior space, which receives priority over exterior styling at first. China responded with the Urbansphere minivan-esque vehicle, which gives us a glimpse of how automated driving will affect interiors since the driver will be freed from driving duties.


Audi’s Activesphere concept is coming in early 2023.

Audi

Next up is the forthcoming Activesphere concept, which Lichte says will integrate automated driving and represent the next big step in Audi's design direction. He hints that even the definition of an SUV will evolve as vehicles will have reduced ride heights to maximize aerodynamic efficiencies.

His enthusiasm for this next concept is palpable, and we're admittedly excited to see how Audi adapts and evolves to this changing landscape. One thing is certain: the future will look very different.


Source: https://www.caranddriver.com/news/a41487790/audi-designers-activesphere-concept/ (6 Oct 2022)
Spotlight on Mathematical Thinking

The Education Week Spotlight on Mathematical Thinking is a collection of articles hand-picked by our editors for their insights on current math learning gaps, strategies for empowering students with mathematical thought, how to start students on the path of fluency, how teachers are giving learners the tools to solve the world’s puzzles, how early math supports can help vulnerable students, and more.

Source: https://www.edweek.org/products/spotlight/spotlight-on-mathematical-thinking (27 Sep 2022)
‘Visual Thinking’ Review: Do You See What I’m Saying?

Source: https://www.wsj.com/articles/visual-thinking-review-do-you-see-what-im-saying-11665526872 (11 Oct 2022)
Trump Says You Can Declassify Something 'Even by Thinking About It'

Former President Donald Trump is defending his handling of sensitive government records, telling Fox News host Sean Hannity that the president can declassify materials with just a thought.

Trump made the remarks Wednesday evening in response to litigation that's increasingly centering on whether Trump properly declassified thousands of documents seized by FBI agents from his Mar-a-Lago home in Florida. Speaking to Hannity at Mar-a-Lago, Trump reiterated his claim that he declassified the documents in addition to declaring a sweeping new ability for the president to do so.

"You can declassify just by saying it's declassified, even by thinking about it. Because you're sending it to Mar-a-Lago or to wherever you're sending," said Trump. "And it doesn't have to be a process. There can be a process, but there doesn't have to be. You're the president. You make that decision. So when you send it, it's declassified. Because I declassified everything."

Former President Donald Trump speaks to supporters at a rally to support local candidates at the Mohegan Sun Arena on September 3, 2022, in Wilkes-Barre, Pennsylvania. Trump told Fox News host Sean Hannity on Wednesday that the president can declassify documents "even by thinking about it." Spencer Platt/Getty Images

FBI agents carried out a court-approved search of Trump's home in August as part of a Department of Justice (DOJ) investigation into whether the former president was hoarding classified and other sensitive documents. Trump responded with a civil lawsuit that successfully sought the appointment of a special master, an independent arbiter to sort out private materials hauled away by FBI agents.

DOJ lawyers have argued in court filings that Trump has not proven that he declassified the documents. Trump's legal team additionally has not argued that the documents were declassified.

Judge Raymond Dearie, the recently appointed special master in the case, asked Trump's lawyers to provide details on the former president's declassification of documents. Trump's legal team earlier this week objected and suggested that doing so would mean disclosing their defense to a potential future indictment of the former president.

Dearie reportedly told Trump's lawyers during a hearing Tuesday that they cannot "have your cake and eat it too."

Others have disputed Trump's claim that he declassified documents he took to Mar-a-Lago at the end of his presidency—including members of his inner circle.

Mick Mulvaney, who served as Trump's acting chief of staff from January 2019 to March 2020, said during a Newsmax interview in August that there's "a formal structure" to declassifying documents.

"You can't just sort of stand over a box of documents, wave your hand and say these are all declassified," he said. "That's not how the system works."

Other high-ranking members of Trump's administration, including former Chief of Staff John Kelly and former national security adviser John Bolton, said they were unaware of any "standing order" to declassify documents taken to his residence.

Newsweek reached out to the DOJ for comment.

Source: https://www.newsweek.com/trump-says-you-can-declassify-something-even-thinking-about-it-1745174 (22 Sep 2022)
Stumped By False Dilemmas? Try Both/And Thinking

My friend Mike is on the road most of each week, eating all of his meals in restaurants. His wife Darlene is at home, cooking every meal for herself and the kids. When Mike returns home on Friday, he finds Darlene tired of cooking and eager for a break. Naturally, the last thing Mike wants is another restaurant meal.

For some couples, this situation would be ripe with potential conflict. But Mike and Darlene figured this one out years ago.

While Mike is out of town, Darlene does the meal planning and grocery shopping for the weekend. Then on Friday night and through the weekend, Mike (while teaming up with the couple’s teenagers) prepares all the meals. Not only has he developed some culinary skills, he and Darlene have learned how to manage other false dilemmas.

What’s a false dilemma? It’s a logical fallacy involving a situation in which only two alternatives are considered, when in fact there are additional options (sometimes shades of gray between extremes).

As someone once said, “When life gives you a dilemma, make dilemma-nade.”

False dilemmas are everywhere we look—not just in personal relationships, but in every facet of our lives, including the daily navigation of our careers.

An insightful guide in making “dilemma-nade” can be found in Both/And Thinking: Embracing Creative Tensions to Solve Your Toughest Problems by Wendy K. Smith and Marianne W. Lewis.

Wendy Smith is a professor of management at the University of Delaware, and Marianne Lewis is dean of the College of Business at the University of Cincinnati.

The authors combine more than 25 years of experience and research on navigating paradoxes to offer leaders, policymakers, and individual contributors a practical approach for dealing with everyday challenges.

Rodger Dean Duncan: You say developing both/and thinking begins by noticing “the paradoxes that lurk beneath our presenting dilemmas.” Please deliver us a couple of examples of such paradoxes.

Wendy K. Smith: Underlying our personal and societal dilemmas are interdependent opposites like today-tomorrow, self-other, give-take, global-local. These yin-yangs define and inform one another, while also pulling in opposing directions.

For example, consider a career dilemma of whether to stay at a current job or take a new position. Underlying this dilemma are paradoxes between loyalty and opportunity, personal and company needs. Similarly, organizations face dilemmas when deciding whether to invest in current products or innovation. Beneath these challenges lie paradoxes of short-term and long-term, stability and change.

Duncan: From your years of research, you’ve developed what you call “the paradox system” for helping people shift how they think and feel when navigating a paradox. Tell us how that system works.

Smith: Imagine navigating the paradoxes of a career dilemma. Doing so is not just about how you make a decision. You also have to manage your emotions. You have to think about the context that informs the decision, and how you will continue learning about the decision. That is, you need a variety of tools to navigate paradox. We bring these together into what we call The Paradox System. Two big ideas highlight how it works.

First, the Paradox System contains tools labeled ABCD for ease of memory:

Assumptions – change your mindset and questions from either/or to both/and

Boundaries – create structures that can contain the tensions

Comfort – find comfort in the discomfort of navigating paradoxes

Dynamics – engage change and experimentation for ongoing learning.

Second, navigating paradoxes is paradoxical. The tools that help us navigate paradoxes are themselves paradoxical. The Paradox System involves assumptions and comfort, engaging our head and hearts, as cognition and emotions reinforce each other. The system also involves boundaries and dynamism. Stable boundaries can unleash creative improvisation, while change can enhance and clarify boundaries.

For example, Paul Polman built a paradox system as CEO of Unilever to achieve the Unilever Sustainable Living Plan (USLP). Viewing financial and social responsibilities as a paradox, he helped employees value tensions between growing profit while reducing their environmental impact (assumptions). He created structure that sharpened focus on these competing demands, specifying roles, metrics and goals for both profit and sustainability (boundaries). He also fostered a culture where employees could discuss concerns and uncertainties (comfort). Finally, Unilever leaders constantly experimented, innovating with new practices and stakeholder partnerships to improve how they managed both social and financial demands (dynamics).

We can apply these same tools to our personal decisions as well.

Duncan: Why do people allow themselves to be bullied by “either/or” thinking?

Marianne W. Lewis: As a dean, I often hear people grappling with career dilemmas. “Should I focus on excelling in my current job, strengthening my expertise and organizational potential? Or should I take a leap, learning new skills and exploring possibilities?” This kind of dilemma creates anxiety and uncertainty. Either/or thinking offers a sense of control. We weigh the pros and cons of opposing demands and make a decision. A or B? In the short-term, clear, consistent answers reduce the discomfort of tensions. In the longer-term, however, they create limitations.

For example, in response to career dilemmas, some people get stuck, waiting for a clear sign that tips the scales, or pushing so hard for successive promotions that they burn out and neglect wider opportunities. Others get in a jumping habit, always seeking a better opportunity. Moves can be exciting, but they miss means to deepen learning, impact, and community. Why limit our options? Talent is treasured and loyalty scarce, and technologies enable learning and work in many modes.

Duncan: What sort of cognitive traps seem to be the most common impediments to problem-solving?

Lewis: Paradoxes create cognitive dissonance, the discomfort of inconsistencies. We might face inconsistencies between current experience and our past understanding.

For example, you might find that a political foe has long volunteered for a cause you champion. We can experience inconsistencies between what is said and what is done. Your supervisor stresses greater innovation, but closely monitors your productivity. Or you might face inconsistencies between plans and outcomes. The more globally coordinated your organization becomes, the more local managers stress their regional differences.

Acclaimed psychologist, Paul Watzlawick, explained: “Paradox is the Achilles heel of our logical, analytical, rational world view. It is the point at which the seemingly all-embracing division of reality into pairs of opposites, especially the Aristotelian dichotomy of true and false, breaks down and reveals itself as inadequate.”

To reduce dissonance, we seek clear, consistent answers. Yet either/or thinking limits our options. In contrast, navigating paradoxes takes curiosity and openness. Allowing oneself to doubt, such as by questioning assumed cause-effect relationships, also enables wisdom. Surprising observations can build when understanding the paradox of knowledge: the more we know, the more we know we don’t know.

Duncan: What role do emotional and behavioral traps play in people’s struggles with everyday challenges in the workplace and in family life?

Smith: Emotions play a powerful role in responding to paradox, as the discomfort of co-existing contradictions sparks anxiety. As Freud and subsequent psychologists stressed, anxiety threatens the ego. Because paradoxes surprise and confuse us, they challenge us, throwing into question our existing mindsets, identities, skills, and behaviors. Yet our behaviors create habits that reinforce our existing approach, preventing change.

Career growth is a good example. Successful job performance is often rewarded with more challenging opportunities. Yet greater responsibilities require learning and changes that initially diminish performance. Performing and learning work in tandem, ebbing and flowing throughout our careers. Navigating the paradox starts with embracing the tensions.

Duncan: For many people, trying to navigate a paradox simply leads to cycles of counter-productivity. What patterns seem to be the most common, what’s your advice for avoiding them?

Lewis: We identify three patterns of vicious cycles stemming from either/or thinking. The first is intensification. People make a choice, then continually reinforce that choice. This might be fine for a while, but when situations change, it’s hard to pull out of the rabbit hole. Firms like Blackberry and Blockbuster fell into this trap. Singularly focused on their market-leading products, they avoided innovation despite shifting technology.

Second is the pattern of over-correction. Anyone who has been on diets knows the swings between excessive discipline and excessive indulgence. We describe over-correction as a wrecking ball, swinging to extremes and creating all kinds of destruction.

The final pattern is polarization. Groups reinforce their own side while diminishing and ultimately dehumanizing the other. Polarization defines our current political landscape, while seeping into our family dinners and harming friendships. We depict polarization as trench warfare—each side digs in, reinforces their perspective and shoots at the opposition. Instead of creative problem solving, both sides end up with lots of casualties.

Duncan: “Balancing” professional life with personal life is a constant juggling act for most people. How can both/and thinking help?

Smith: As a mom of three kids, I constantly feel the tug-of-war between work and home. When my twins were first born, my conversations with friends and colleagues reinforced either/or thinking. I remember thinking that there must be a better way.

Understanding two patterns of both/and thinking can help navigate work/life tensions. The first is a win/win solution. We describe this creative integration as a mule—a hybrid stronger than a horse, smarter than a donkey. When people think “both/and” they typically envision a win/win. Yet these are relatively rare. In the work/life tension, our work might become our life—we open a daycare so our work involves taking care of our kids, or we charter a fishing boat so that our life’s passion is our work.

More often, navigating paradox happens through what we call tightrope walking—or being consistently inconsistent. Tightrope walkers look out to a point in the distance, then get there by making microshifts between left and right. They never achieve a static balance but are consistently balancing. They also don’t veer too far left or right, or else they fall. Work/life tensions often require consistent inconsistency—arriving home for dinner some nights and working late others. Being too focused on work can foster burnout and harm personal relationships. Being too focused on home can lead to lost productivity and career opportunities. Oscillating between the two allows us to find ways that our energy from work can enhance our time at home and vice versa.

Duncan: You say both/and thinking begins with shifting our underlying assumptions in three areas. Please tell us about those.

Smith: Paradox mindsets enable both/and thinking. Such mindsets shift our understandings of:

Knowledge—from one truth to multiple truths

Resources—from scarcity to abundance

Problem solving—from controlling to coping

First, paradox mindsets assume that there are multiple truths. Conflicts often happen when each person believes they have the truth. Therefore, the other person must be wrong. The other day, my husband and I were discussing a parenting issue. We both believed that we had the right—and only—answer. We got stuck in a rut, each defending our perspective. We didn’t get any further until we were both able to come back to the issue and do a better job of listening to each other.

Second, paradox mindsets involve shifting our thinking about resources from a scarcity to an abundance perspective. For a project manager, either/or thinking means that allocating team time to one project means taking it from another. Both/and thinking starts with asking how we can expand the value of resources. While there might be 24 hours in a day, some hours are more productive for some than others. Can we leverage team members’ differing personal biorhythms or time zones? Likewise, time management experts recommend starting with big projects, because little projects can get done in smaller time chunks. Expanding the value of resources, we can explore more creative alternatives.

Finally, paradox mindsets invite us to let go of control. Opposing ideas and demands raise anxiety. Making a clear choice offers us comfort. But our toughest problems are messy and complicated. Consider the pandemic. Trying to minimize fear, lots of people tried to assert clear decisions in an uncertain, complex, ever-changing space. Both/and thinking approaches problem solving as coping. Navigating paradoxes through listening, experimenting, and adapting as we learn more. Letting go of control recognizes that there are many possible paths, and options appear as we move forward.

Duncan: How can both/and thinking help people deal productively with the discomforts and anxieties associated with organizational change?

Lewis: Organizational change surfaces paradoxes of stability-change, tradition-innovation, short-term-long-term. Both/and thinking helps people tap into their positive potential. Stability provides a foundation for change, while change enables more resilient stability. Holding onto core values, traditions, and partnerships can support change, making the process less chaotic and uncertain. For example, when LEGO makes bold strategic changes, they do so while staying true to their core technology (the interlocking brick) and mission (to inspire builders of tomorrow).

Duncan: In what ways can leaders model both/and thinking as a bedrock practice in their organizational cultures?

Smith: As we noted earlier, Paul Polman offered a great example when turning around Unilever.

He did two things. First, he embedded both/and thinking into the organizational culture and structures. He identified a higher purpose—making sustainable living commonplace. This vision motivated leaders to embrace opposing demands. Adding to traditional business roles, he elevated guardians of the social and environmental mission. He also diversified his senior leadership and board to encourage opposing views. And (most controversially), he stopped offering quarterly guidance to investors to empower longer-term decision making. We describe these moves as guardrails—structures, metrics, people, and goals that prevent the organization from focusing too much on one pole or another.

Second, Polman invited people into both/and thinking. He constantly asked individuals to name their tensions so that all could learn and work through them. He taught leaders skills for managing conflicts. And he asked everyone to align their annual goals to the sustainable living plan, helping personalize its paradoxes for each employee. Good leaders both create the organizational conditions and support individuals to embrace paradoxes.

Source: Rodger Dean Duncan, https://www.forbes.com/sites/rodgerdeanduncan/2022/10/03/stumped-by-false-dilemmas-try-bothand-thinking/ (2 Oct 2022)
Sloan – “Magical Thinking”

Sloan are releasing a new album, Steady, in just about a month’s time. The Canadian band has shared “Spend The Day” and “Scratch The Surface” from it so far, and today they’re back with another single, the punchy “Magical Thinking.”

“This song lampoons the idea of anyone who thinks that their feelings trump science,” the band’s Chris Murphy said. “Yes, I think being alive is a miracle and that we should all be grateful but people’s beliefs ultimately mean nothing and whatever those beliefs are they shouldn’t become legislation or be tax exempt and I shouldn’t have to respect them. And I don’t.” Check it out below.

Steady is out 10/21 via murderrecords/Universal Music Canada.

Source: https://www.stereogum.com/2200437/sloan-magical-thinking/music/ (22 Sep 2022)
Trump to Hannity: Presidents can declassify documents 'by thinking about it'

Presidents can declassify documents on a whim, even just by "thinking about it," former President Donald Trump told Fox News' Sean Hannity on Wednesday.

Trump appeared on Hannity's show to discuss the ongoing federal investigation into classified documents found in Trump's Mar-a-Lago home during an FBI raid.

"Is there a process? What was your process to declassify?" Hannity asked.

"There doesn't have to be a process, as I understand it," Trump responded. "You know, different people say different things, but as I understand it there doesn't have to be."

DOJ ASKS 11TH CIRCUIT FOR PARTIAL STAY, ALLOWING ATTORNEYS TO USE CLASSIFIED DOCS DURING SPECIAL MASTER REVIEW

Former President Donald Trump speaks at the Conservative Political Action Conference in Dallas, Texas, on Aug. 6, 2022. (Brandon Bell/Getty Images)

"If you're the president of the United States you can declassify just by saying: ‘It’s declassified.' Even by thinking about it," he added. "There doesn't have to be a process. There can be a process, but there doesn't have to be. You're the president, you make that decision…I declassified everything."

NEW YORK AG SUES TRUMP OVER FRAUD ALLEGATIONS

Classification is a system within the executive branch for privileging information, typically relying on three tiers: confidential, secret and top secret. Only those with proper clearance levels can handle or be told about the information in a classified document.

This image contained in a court filing by the Department of Justice on Aug. 30, 2022, and redacted in part by the FBI, shows a photo of documents seized during the Aug. 8 search of former President Donald Trump's Mar-a-Lago estate in Florida. (Department of Justice via AP)

Declassifying documents typically follows a process in which the agency to which the information pertains is consulted. Then, an officially designated "original classification authority" would move to declassify the document, according to the New York Times.

While presidents do have the authority to declassify documents on their own, the relevant agencies would still have to be informed of the move for a formal declassification to take place.

Attorney General Merrick Garland speaks at the Department of Justice in Washington, D.C., on Aug. 11, 2022. (Drew Angerer/Getty Images)

The FBI found roughly 100 documents marked to varying levels of classification inside Mar-a-Lago.

The DOJ has opened a criminal investigation into Trump's handling of the files. A federal appeals court allowed investigators to continue inspecting the files in a Wednesday ruling, dealing a blow to Trump's argument that the documents were no longer classified.


The files are being looked at by Judge Raymond Dearie, a special master appointed for that purpose at the request of Trump's legal team.

Source: Fox News, https://www.foxnews.com/politics/trump-hannity-presidents-can-declassify-documents-thinking-about-it (22 Sep 2022)
Trump claims he can declassify top secret documents just ‘by thinking about it’

Donald Trump says he can declassify top secret government documents just “by thinking about it”.

In an interview with Fox News host Sean Hannity, the former president repeated his claim that dozens of secret and confidential papers seized at his Mar-a-Lago home had been declassified.

Asked about the process needed to do that, Mr Trump said: “You know, there’s different people say different things.

“There doesn’t have to be (a process), as I understand it. If you’re the president of the United States, you can declassify just by saying ‘it’s declassified’, even by thinking about it. Because you’re sending it to Mar-a-Lago or to wherever you’re sending it.

“And there doesn’t have to be a process. There can be a process, but there doesn’t have to be. You’re the president, you make that decision.

“So when you send it, it’s declassified. We – I declassified everything.”

He went on to suggest that the National Archives, which had been trying to recover the documents Mr Trump took to Mar-a-Lago, has “a radical left group of people running that thing”. He added that “when you send documents over there, I would say there’s a very good chance that a lot of those documents will never be seen again”. He did not elaborate on what he meant and was not pressed on the issue by Hannity.

The question of whether the papers found at his Florida home remained classified or not – and how much bearing that has on whether he was allowed to have them – has been debated since the 8 August search.

While Mr Trump has repeatedly claimed that the papers were declassified, his lawyers have hesitated about making the same claims in court.

On Tuesday the matter came before federal judge Raymond J Dearie, who was appointed a special master – essentially an impartial arbiter – at Mr Trump’s insistence. He asked Mr Trump’s legal team what measures, if any, the former president had taken to declassify the papers.

When they declined to provide any details, Judge Dearie said: “My view is, you can’t have your cake and eat it too.”

Critics have suggested that Mr Trump’s insistence that a special master be appointed is little more than a delaying tactic in a case that could have serious legal repercussions for him.

The Department of Justice is investigating whether the former president broke the law by hoarding the papers, some of them the very highest level of security, at his home after the National Archives spent the best part of a year trying to get them back.

Mr Trump insists he did nothing wrong and is the victim of a politically-motivated “witch hunt”.

Source: https://www.independent.co.uk/news/world/americas/us-politics/trump-declassify-secret-fbi-thinking-b2172633.html (22 Sep 2022)