Enterprises today contain a mix of services, legacy applications, and data, which are topped by a range of consumer channels, including desktop, web and mobile applications. But too often, there is a disconnect due to the absence of a properly created and systematically governed integration layer, which is required to enable business functions via these consumer channels. The majority of enterprises are battling this challenge by implementing a service-oriented architecture (SOA) where application components provide loosely-coupled services to other components via a communication protocol over a network. Eventually, the intention is to embrace a microservice architecture (MSA) to be more agile and scalable. While not fully ready to adopt an MSA just yet, these organizations are architecting and implementing enterprise application and service platforms that will enable them to progressively move toward an MSA.
In fact, Gartner predicts that by 2017 over 20% of large organizations will deploy self-contained microservices to increase agility and scalability, and it's happening already. MSA is increasingly becoming an important way to deliver efficient functionality. It serves to untangle the complications that arise with the creation of services; the incorporation of legacy applications and databases; and the development of web apps, mobile apps, or any consumer-based applications.
Today, enterprises are moving toward a clean SOA and embracing the concept of an MSA within a SOA. Possibly the biggest draws are the componentization and single-function focus offered by these microservices, which make it possible to deploy a component rapidly as well as scale it as needed. It isn't a novel concept, though.
For instance, in 2011, a service platform in the healthcare space started a new strategy: whenever it wrote a new service, it would spin up a new application server to support the service deployment. It's a practice that came from the DevOps side; it created an environment with fewer dependencies between services and ensured minimal impact on the rest of the systems in the event of some sort of maintenance. As a result, the services were running on over 80 servers. It was, in fact, very basic, since the proper DevOps tools available today did not yet exist; instead, the team was using shell scripts and Maven-type tools to build servers.
While microservices are important, they're just one aspect of the bigger picture. It's clear that an organization cannot leverage the full benefits of microservices on their own. The inclusion of MSA and incorporation of best practices when designing microservices is key to building an environment that fosters innovation and enables the rapid creation of business capabilities. That's the real value add.
The generally accepted practice when building your MSA is to focus on how you would scope out a service that provides a single function, rather than on its size. The inner architecture typically addresses the implementation of the microservices themselves. The outer architecture covers the platform capabilities that are required to ensure connectivity, flexibility, and scalability when developing and deploying your microservices. To this end, enterprise middleware plays a key role when crafting both the inner and outer architectures of the MSA.
First, middleware technology should be DevOps-friendly, contain high-performance functionality, and support key service standards. Moreover, it must support a few design fundamentals, such as an iterative architecture, and be easily pluggable, which in turn will provide rapid application development with continuous release. On top of these, a comprehensive data analytics layer is critical for supporting a design for failure.
The biggest mistake enterprises often make when implementing an MSA is to completely throw away established SOA approaches and replace them with the theory behind microservices. This results in an incomplete architecture and introduces redundancies. The smarter approach is to consider an MSA as a layered system that includes enterprise service bus (ESB)-like functionality to handle all integration-related functions. This also acts as a mediation layer, so changes can be made at this level and then applied to all relevant microservices. In other words, an ESB or similar mediation engine enables a gradual move toward an MSA by providing the required connectivity to merge legacy data and services into microservices. This approach is also important for incorporating some fundamental rules by launching the microservice first and then exposing it via an API.
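To make the mediation idea more concrete, below is a minimal Python sketch of an ESB-style adapter; it is our own illustration rather than anything from the article, and the endpoint URL and field names are placeholders. It wraps a legacy XML service and translates its response into a JSON-friendly structure that newer microservices can consume.

```python
# Hypothetical mediation/adapter sketch: wrap a legacy XML service so that
# newer microservices can consume its data as plain dictionaries (JSON-friendly).
import xml.etree.ElementTree as ET

import requests  # any HTTP client would do; requests is assumed here

LEGACY_ENDPOINT = "http://legacy.example.internal/customerService"  # placeholder URL


def fetch_customer(customer_id: str) -> dict:
    """Call the legacy XML service and translate its response for microservices."""
    response = requests.get(LEGACY_ENDPOINT, params={"id": customer_id}, timeout=5)
    response.raise_for_status()

    root = ET.fromstring(response.text)
    # The field names below are illustrative; a real mediation layer would map
    # the legacy schema onto whatever contract the microservices agree on.
    return {
        "id": root.findtext("Id"),
        "name": root.findtext("Name"),
        "status": root.findtext("Status"),
    }


if __name__ == "__main__":
    print(fetch_customer("42"))
```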
Significantly, the inner architecture needs to be simple so that each microservice is independently deployable and independently disposable. Disposability is required in the event that the microservice fails or a better service emerges; in either case, the respective microservice must be easy to dispose of. The microservice also needs to be well supported by the deployment architecture and the operational environment in which it is built, deployed, and executed. An ideal example of this would be releasing a new version of the same service to introduce bug fixes, add new features or enhancements to existing features, or remove deprecated services.
The key requirements of an MSA inner architecture are determined by the framework on which the MSA is built. Throughput, latency, and low resource usage (memory and CPU cycles) are among the key requirements that need to be taken into consideration. A good microservice framework will typically build on a lightweight, fast runtime and modern programming models, such as annotation-based meta-configuration that is independent of the core business logic. Additionally, it should offer the ability to secure microservices using industry-leading security standards, as well as metrics to monitor the behavior of microservices.
With the inner architecture, the implementation of each microservice is relatively simple compared to the outer architecture. A good service design will ensure that six factors have been considered when scoping out and designing the inner architecture:
First, the microservice should have a single purpose and single responsibility, and the service itself should be delivered as a self-contained unit of deployment that can create multiple instances at runtime for scale.
Second, the microservice should have the ability to adopt an architecture that's best suited for the capabilities it delivers and one that uses the appropriate technology.
Third, once the monolithic services are broken down into microservices, each microservice or set of microservices should have the ability to be exposed as APIs. However, within the internal implementation, the service could adopt any suitable technology to deliver the respective business capability. To do this, the enterprise may want to consider something like Swagger to define the API specification or API definition of a particular microservice, and the microservice can use this as the point of interaction. This is referred to as an API-first approach in microservice development (a minimal sketch follows this list).
Fourth, with units of deployment, there may be options, such as self-contained deployable artifacts bundled in hypervisor-based images, or container images, which are generally the more popular option.
Fifth, the enterprise needs to leverage analytics to refine the microservice, as well as to provision for recovery in the event the service fails. To this end, the enterprise can incorporate the use of metrics and monitoring to support this evolutionary aspect of the microservice.
Sixth, even though the microservice paradigm itself enables the enterprise to have multiple or polyglot implementations for its microservices, the use of best practices and standards is essential for maintaining consistency and ensuring that the solution follows common enterprise architecture principles. This is not to say that polyglot opportunities should be completely vetoed; rather, they need to be governed when used.
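As a hedged illustration of the API-first approach mentioned in the third point above, here is a minimal sketch of a single-purpose microservice. It is our own example, not the article's: it uses the FastAPI framework, which generates an OpenAPI (Swagger) definition automatically, and the service name, route, and fields are hypothetical.

```python
# Minimal single-purpose microservice sketch. FastAPI publishes an OpenAPI
# (Swagger) definition for it automatically at /openapi.json.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="order-status-service")  # hypothetical service name


class OrderStatus(BaseModel):
    order_id: str
    status: str


# In-memory stand-in for whatever data store the real service would use.
_ORDERS = {"1001": "SHIPPED", "1002": "PROCESSING"}


@app.get("/orders/{order_id}/status", response_model=OrderStatus)
def get_order_status(order_id: str) -> OrderStatus:
    """Single responsibility: report the status of one order."""
    return OrderStatus(order_id=order_id, status=_ORDERS.get(order_id, "UNKNOWN"))

# Run with: uvicorn this_module:app --port 8000
```

The generated API definition can then act as the point of interaction for consumers, while the implementation behind it remains free to change.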
Once the inner architecture has been set up, architects need to focus on the functionality that makes up the outer architecture of their MSA. A key component of the outer architecture is the introduction of an enterprise service bus (ESB) or similar mediation engine that will aid in connecting legacy data and services to the MSA. A mediation layer will also enable the enterprise to maintain its own standards while others in the ecosystem manage theirs.
The use of a service registry will support dependency management, impact analysis, and discovery of the microservices and APIs. It will also streamline service/API composition and make it possible to wire microservices into a service broker or hub. Any MSA should also support the creation of RESTful APIs that will help the enterprise to customize resource models and application logic when developing apps.
By sticking to the basics of designing the API first, implementing the microservice, and then exposing it via the API, the API rather than the microservice becomes consumable. Another common requirement enterprises need to address is securing microservices. In a typical monolithic application, an enterprise would use an underlying repository or user store to populate the required information from the security layer of the old architecture. In an MSA, an enterprise can leverage widely-adopted API security standards, such as OAuth2 and OpenID Connect, to implement a security layer for edge components, including APIs within the MSA.
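As one hedged example of such an edge-security layer (our sketch, not a reference implementation), a gateway or API facade can validate an OAuth2 bearer token issued as a JWT before forwarding the request to a microservice. The PyJWT library is assumed, and the issuer, audience, and key are placeholders.

```python
# Sketch of edge security for an API in an MSA: validate an OAuth2 bearer
# token (a JWT) before letting a request through to the microservice.
import jwt  # PyJWT

ISSUER = "https://idp.example.com/"   # placeholder identity provider
AUDIENCE = "orders-api"               # placeholder API identifier
PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"  # placeholder


def validate_bearer_token(authorization_header: str) -> dict:
    """Return the token's claims if it is valid, otherwise raise an exception."""
    scheme, _, token = authorization_header.partition(" ")
    if scheme.lower() != "bearer" or not token:
        raise ValueError("Expected an 'Authorization: Bearer <token>' header")

    # PyJWT verifies the signature, expiry, issuer, and audience for us.
    return jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],
        issuer=ISSUER,
        audience=AUDIENCE,
    )
```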
On top of all these capabilities, what really helps to untangle MSA complexities is the use of an underlying enterprise-class platform that provides rich functionality while managing scalability, availability, and performance. That is because breaking down a monolithic application into microservices doesn't necessarily amount to a simplified environment or service. To be sure, at the application level, an enterprise is essentially dealing with several microservices that are far simpler than a single, complicated monolithic application. Yet the architecture as a whole may not necessarily be less arduous.
In fact, the complexity of an MSA can be even greater given the need to consider the other aspects that come into play when microservices need to talk to each other versus simply making a direct call within a single process. What this essentially means is that the complexity of the system moves to what is referred to as the "outer architecture", which typically consists of an API gateway, service routing, discovery, message channel, and dependency management.
With the inner architecture now extremely simplified--containing only the foundation and execution runtime that would be used to build a microservice--architects will find that the MSA now has a clean services layer. More focus then needs to be directed toward the outer architecture to address the prevailing complexities that have arisen. There are some common pragmatic scenarios that need to be addressed, as explained below.
The outer architecture will require an API gateway to help it expose business APIs internally and externally. Typically, an API management platform will be used for this aspect of the outer architecture. This is essential for exposing MSA-based services to consumers who are building end-user applications, such as web apps, mobile apps, and IoT solutions.
Once the microservices are in place, there will be some form of service routing in which requests that come in via APIs are routed to the relevant service cluster or service pod. Within the microservices themselves, there will be multiple instances to scale based on the load. Therefore, there's a requirement to carry out some form of load balancing as well.
Additionally, there will be dependencies between microservices--for instance, if microservice A has a dependency on microservice B, it will need to invoke microservice B at runtime. A service registry addresses this need by enabling services to discover the endpoints. The service registry will also manage the API and service dependencies as well as other assets, including policies.
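To make the registry and discovery idea concrete, here is a toy Python sketch, ours and simplified far beyond a production registry such as Consul or Eureka: services register their endpoints, a caller discovers one instance, and round-robin selection stands in for real load balancing.

```python
# Toy service registry sketch: services register endpoints, clients discover
# them, and round-robin selection stands in for real load balancing.
import itertools
from collections import defaultdict


class ServiceRegistry:
    def __init__(self):
        self._endpoints = defaultdict(list)  # service name -> list of endpoints
        self._cursors = {}                   # service name -> round-robin iterator

    def register(self, name: str, endpoint: str) -> None:
        self._endpoints[name].append(endpoint)
        self._cursors[name] = itertools.cycle(self._endpoints[name])

    def discover(self, name: str) -> str:
        """Return the next endpoint for a service, rotating across its instances."""
        if name not in self._cursors:
            raise LookupError(f"No instances registered for '{name}'")
        return next(self._cursors[name])


registry = ServiceRegistry()
registry.register("microservice-b", "http://10.0.0.5:8080")
registry.register("microservice-b", "http://10.0.0.6:8080")

# Microservice A discovers an instance of microservice B before invoking it.
print(registry.discover("microservice-b"))  # first instance...
print(registry.discover("microservice-b"))  # ...then the second
```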
Next, the MSA outer architecture needs some messaging channels, which essentially form the layer that enables interactions within services and links the MSA to the legacy world. In addition, this layer helps to build a communication (micro-integration) channel between microservices, and these channels should use lightweight protocols, such as HTTP or MQTT, among others.
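The snippet below is a minimal sketch of such a channel (our illustration, with an assumed broker address and topic name) using MQTT via the paho-mqtt client: one microservice publishes an event and another subscribes to it.

```python
# Sketch of a lightweight messaging channel between microservices using MQTT
# via the paho-mqtt client (1.x API; 2.x also needs a CallbackAPIVersion argument).
import json
import time

import paho.mqtt.client as mqtt

BROKER = "broker.example.internal"  # placeholder broker host
TOPIC = "orders/created"            # placeholder topic


def on_message(client, userdata, message):
    event = json.loads(message.payload.decode("utf-8"))
    print("Received event:", event)


# Subscriber side (for example, an inventory microservice).
subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER)
subscriber.subscribe(TOPIC)
subscriber.loop_start()

# Publisher side (for example, an order microservice announcing a new order).
publisher = mqtt.Client()
publisher.connect(BROKER)
publisher.publish(TOPIC, json.dumps({"order_id": "1001", "status": "CREATED"}))

time.sleep(1)  # give the subscriber a moment to receive the event
```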
When microservices talk to each other, there needs to be some form of authentication and authorization. With monolithic apps, this wasn't necessary because there was a direct in-process call. By contrast, with microservices, these translate to network calls. Finally, diagnostics and monitoring are key aspects that need to be considered to figure out the load type handled by each microservice. This will help the enterprise to scale up microservices separately.
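As a hedged example of how such diagnostics might be wired in (our illustration using the prometheus_client library, not something the article prescribes), each microservice can expose counters and latency histograms that a monitoring system scrapes to see the load that service handles and to drive scaling decisions.

```python
# Sketch of per-microservice metrics using the prometheus_client library.
# A monitoring system can scrape these to see the load each service handles.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("orders_requests_total", "Total requests handled by the orders service")
LATENCY = Histogram("orders_request_latency_seconds", "Request latency in seconds")


@LATENCY.time()
def handle_request() -> None:
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work


if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        handle_request()
```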
To put things into perspective, let's analyze some actual scenarios that demonstrate how the inner and outer architecture of an MSA work together. We'll assume an organization has implemented its services using Microsoft Windows Communication Foundation or the Java EE (J2EE) service framework, and developers there are writing new services using a new microservices framework by applying the fundamentals of MSA.
In such a case, the existing services that expose the data and business functionality cannot be ignored. As a result, new microservices will need to communicate with the existing service platforms. In most cases, these existing services will use the standards adhered to by the framework. For instance, old services might use service bindings, such as SOAP over HTTP, Java Message Service (JMS), or IBM MQ, and be secured using Kerberos or WS-Security. In this example, messaging channels will also play a big role in protocol conversion, message mediation, and security bridging from the old world to the new MSA.
Another aspect the organization would need to consider is the impact on its ability to scale with business growth, given the limitations posed by a monolithic application; an MSA, by contrast, is horizontally scalable. Among the obvious limitations are the likelihood of errors, since it's cumbersome to test new features in a monolithic environment, and delays in implementing changes, which hamper the ability to meet immediate requirements. Another challenge would be supporting the monolithic code base in the absence of a clear owner; with microservices, individual functions can be managed on their own, and each can be expanded quickly as required without impacting other functions.
In conclusion, while microservices offer significant benefits to an organization, adopting an MSA in a phased, iterative manner may be the best way to move forward and ensure a smooth transition. Key aspects that make MSA the preferred service-oriented approach are clear ownership and the fact that it fosters failure isolation, thereby enabling these owners to make the services within their domains more stable and efficient.
Asanka Abeysinghe is vice president of solutions architecture at WSO2. He has over 15 years of industry experience, which includes implementing projects ranging from desktop and web applications through to highly scalable distributed systems and SOAs in the financial domain, mobile platforms, and business integration solutions. His areas of specialization include application architecture and development using Java technologies, and C/C++ on Linux and Windows platforms. He is also a committer at the Apache Software Foundation.
The best Python online courses make it simple and easy to learn, develop, and advance your programming skills.
Python is one of the most popular high-level, general-purpose programming languages. Named after the comedy troupe Monty Python, the language has a user-friendly syntax that makes it very appealing to beginners. It’s also very flexible and scalable, and has a very vibrant, global community of users.
Thanks to its rich set of tools and libraries you can use Python for just about anything -- from web development and data analysis to artificial intelligence and scientific computing.
According to the TIOBE Index, Python is currently the most popular programming language in the world. In fact, Python is used in some form or another in virtually all major tech companies around the world, which makes it one of the most in-demand skills.
If you want to work with Python scripts, you'll need a text editor suitable for coding and an Integrated Development Environment (IDE) to run them.
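For anyone brand new to this, here is a minimal example (ours, not taken from any of the courses below) of the kind of script you might write in such an editor and then run from a terminal with the command python hello.py; the file name is arbitrary.

```python
# hello.py: a minimal script you could write in any code editor or IDE.
def greet(name: str) -> str:
    return f"Hello, {name}! Welcome to Python."


if __name__ == "__main__":
    print(greet("world"))
```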
We've judged these Python online courses across various parameters, like their pricing plans, the simplicity of their tutorials, the quality of learning support they offered, and what user level they were aimed at. We also evaluated the pace of the courses, the number of learning resources they had, and whether they provided useful features like subtitles.
So whether you are new to Python or to programming itself, here are some of the best Python online courses to help you get to grips with the language.
We've also featured the best laptop for programmers.
Skillshare offers several Python tutorials aimed at beginners, but very few are as comprehensive as "Programming in Python for Beginners". The instructor has designed the course with the assumption that students have absolutely no clue about programming. He'll help you get started by setting up your Python development environment in Windows, before explaining all the basic constructs in the language and when to use them.
The course is made up of over 70 lessons for a total runtime of over 11 hours. The lessons will help you learn how the various arithmetic, logical, and relational operators work, and understand when to use lists, collections, tuples, and dictionaries. The primer on functions is particularly useful, as it shows you how to avoid common mistakes. The course also touches on some advanced topics, like measuring the performance of your code to help you write efficient code. There's an exercise after every few lessons that'll challenge you to apply your newly acquired skills to solve a problem.
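By way of illustration (our own example rather than the course's material), the kind of distinction those lessons draw between the built-in collections looks like this:

```python
# Choosing a built-in collection for the job (illustrative, not from the course).
scores = [72, 88, 95]                        # list: ordered, mutable sequence
scores.append(64)                            # can grow as new scores arrive

point = (12.5, -3.0)                         # tuple: fixed-size, immutable record

student = {"name": "Ada", "scores": scores}  # dict: look values up by key
average = sum(student["scores"]) / len(student["scores"])

unique_scores = set(scores)                  # set: membership tests, no duplicates

print(f"{student['name']} averaged {average:.1f}")
```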
Note, however, that the Polish instructor has an accent, which didn't bother us, but your mileage may vary. We liked the instructor's engaging diction, which made the course really interesting. He also actively engages with students on the course's discussion page to clarify any doubts and share feedback on the exercises.
In terms of delivery, Skillshare has a rather vanilla player compared to some of its peers. It does give you the ability to alter the play speed and add notes, but the lack of support for closed captions is disappointing. Skillshare offers a free trial during which you can take any course in its library, including this one.
Read our full SkillShare learning platform review.
Udemy offers a wide range of excellent courses, but their course, "The Python Mega Course: Build 10 Real World Applications", will be especially good for those who know some Python already. As its name suggests, the course teaches you how to build 10 practical apps using Python, from simple database query apps to web and desktop apps to data visualization dashboards, and more.
The instructor uses the Visual Studio Code IDE in the course, which has over 250 videos divided into 33 sections. The first 8 sections cover the fundamentals of Python and another four cover advanced topics before you get to coding the 10 examples in the remainder of the course.
Many of the example apps are preceded by a section or two that teach the crucial elements in the example. For instance, before you build a desktop database app, you’ll learn how to use the Tkinter library to build GUIs and also how Python interacts with databases, particularly, SQLite, PostgreSQL and MySQL. The video lessons are supplemented by coding exercises and quizzes, and there’s also a Q&A section to post your questions to the instructor.
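To give a flavor of the database side of that material, here is a short, self-contained sketch (ours, using Python's built-in sqlite3 module rather than the course's exact code) showing how an app might store and query records:

```python
# Minimal example of Python talking to a database with the built-in sqlite3
# module (illustrative; the course builds a fuller Tkinter front end on top).
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for a persistent database
cur = conn.cursor()

cur.execute("CREATE TABLE books (title TEXT, author TEXT, year INTEGER)")
cur.execute("INSERT INTO books VALUES (?, ?, ?)",
            ("Fluent Python", "Luciano Ramalho", 2015))
conn.commit()

cur.execute("SELECT title, year FROM books WHERE year > ?", (2000,))
for title, year in cur.fetchall():
    print(title, year)

conn.close()
```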
You can pay for the course once on Udemy to get lifetime access. The instructor regularly updates the course and once you’ve bought the course you’ll get these modifications for free. The learning experience is further enhanced by Udemy’s player, which is one of the best in the game. In addition to altering the playback speed, it’ll help you place bookmarks in the lectures.
To help you find areas of interest, it’ll also display popular locations bookmarked by other students. You also get closed captions in over a dozen languages and can even view an auto-scrolling transcript of the lessons. Furthermore, Udemy’s smartphone app has the option to download a lesson to the device for offline viewing.
Read our full Udemy learning platform review.
LinkedIn Learning offers a great range of professional development courses, and the course, "Advance your career with Python", is no different.
This course is designed for someone who has limited time, and it's ideal if you want a fast-paced introduction to Python. The instructor uses the Anaconda distribution of Python and writes code in Jupyter Notebook. She doesn't skip over any of the building blocks of the language, and her lessons are nicely paced and well illustrated.
The good thing about the course is that instead of straightaway diving into coding a construct, which many fast-paced introductory courses do, the instructor begins each lesson by explaining the construct and its use. The course ends with a quick introduction to object-oriented programming.
LinkedIn Learning’s video player supports closed captions and you can also get a transcript for the course that you can use to jump into the lecture. The service also offers a free 1-month trial, which should be more than enough to take this course.
Read our full LinkedIn Learning review.
Coursera is another of our favorite online learning resources, and their "Principles of Computing" is a good course to expand your coding skills with Python. It's presented in two parts and is offered by Rice University as part of the Fundamentals of Computing Specialization, which has a total of seven courses. The courses divide the lessons across several weeks, each of which has multiple video lectures, readings, practice exercises, homework quizzes, and assignments.
They are conducted by three Computer Science faculty members of Rice University and will upgrade your basic Python skills to help you think like a computer scientist. The courses introduce mathematical and computational principles, and how you can integrate them to solve complex problems, to enable you to write good code.
Coursera has a nice video player that offers closed captions and transcripts. You can also take notes at any point during the video lecture. Best of all you can download the video lectures in MP4 format as well as the subtitles and transcripts for offline viewing. You can audit the courses for free or earn a specialization certificate by subscribing to the service.
Read our full Coursera learning platform review.
edX provides an excellent range of free-to-access courses, and their "Analyzing Data with Python" course could be a great way for those with some Python coding skills to really break out into the wider field of data science.
This course equips you with all the skills you need to crunch raw data into meaningful information using Python, and will familiarize you with Python’s data analysis libraries including Pandas, NumPy, SciPy, and scikit-learn.
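To give a sense of what crunching raw data into meaningful information looks like in practice, here is a tiny sketch (ours, with made-up data) using Pandas, one of the libraries the course covers:

```python
# Tiny data-analysis sketch with Pandas: turn raw rows into a simple summary.
import pandas as pd

raw = pd.DataFrame({
    "city": ["Austin", "Austin", "Boston", "Boston", "Boston"],
    "temp_c": [31.0, 29.5, 22.0, 24.5, 23.0],
})

summary = raw.groupby("city")["temp_c"].agg(["mean", "max"])
print(summary)
```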
The self-paced course is divided into five modules, with a sixth devoted to the final assignment. Each module begins with a summary of the concepts that it'll impart before it introduces the libraries and how they're used to achieve the specified objective. There are quizzes and lab exercises to help you put the newly acquired knowledge to use.
The videos have closed captions as well as English transcripts that you can use to jump to specific points in the video. The course is conducted by IBM and requires you to put in 2-4 hours a week for five weeks. You can get a verified certificate if you score above the specified minimum marks for the various exercises and quizzes.
Read our full edX learning platform review.
We've also featured the best Linux learning providers.
Python online courses are educational programs that teach users about Python, a high-level programming language.
Python is not too difficult to learn and is generally used to develop websites and software, among other things.
When deciding which of the best online Python courses to use, first consider what level of competency you are currently at. If you've not learned Python and you've little experience with other programming languages, then it's definitely recommended to start with the beginner courses, as these will ease you into the basics you'll need before you cover more advanced tools.
However, if you already have a decent amount of programming experience, especially with Python, then feel free to try your hand with the more advanced courses.
To test for the best online Python courses, we searched for a range of popular options and took recommendations from people we know who are learning Python or who are already competent with it. Then we followed the tutorials to get an idea of how easy they were to follow, how easy it was to learn essential tools and processes, and what sort of user level the courses were aimed at, such as beginner, intermediate, or advanced-level users.
See how we test, rate, and review products on TechRadar.
More online programming courses:
Aug. 2, 2022 — A new National Science Foundation initiative has created a $10 million institute led by computer and data scientists at the University of California San Diego that aims to transform the core fundamentals of the rapidly emerging field of Data Science.
Called The Institute for Emerging CORE Methods in Data Science (EnCORE), the institute will be housed in the Department of Computer Science and Engineering (CSE), in collaboration with The Halıcıoğlu Data Science Institute (HDSI), and will tackle a set of important problems in theoretical foundations of Data Science.
UC San Diego team members will work with researchers from three partnering institutions – University of Pennsylvania, University of Texas at Austin and University of California, Los Angeles — to transform four core aspects of data science: complexity of data, optimization, responsible computing, and education and engagement.
EnCORE will join three other NSF-funded institutes in the country dedicated to the exploration of data science through the NSF’s Transdisciplinary Research in Principles of Data Science Phase II (TRIPODS) program.
“The NSF TRIPODS Institutes will bring advances in data science theory that improve health care, manufacturing, and many other applications and industries that use data for decision-making,” said NSF Division Director for Electrical, Communications and Cyber Systems Shekhar Bhansali.
UC San Diego Chancellor Pradeep K. Khosla said UC San Diego’s highly collaborative, multidisciplinary community is the perfect environment to launch and develop EnCORE. “We have a long history of successful cross-disciplinary collaboration on and off campus, with renowned research institutions across the nation. UC San Diego is also home to the San Diego Supercomputer Center, the HDSI, and leading researchers in artificial intelligence and machine learning,” Khosla said. ”We have the capacity to house and analyze a wide variety of massive and complex data sets by some of the most brilliant minds of our time, and then share that knowledge with the world.”
Barna Saha, the EnCORE project lead and an associate professor in UC San Diego’s Department of Computer Science and Engineering and HDSI, said: “We envision EnCORE will become a hub of theoretical research in computing and Data Science in Southern California. This kind of national institute was lacking in this region, which has a lot of talent. This will fill a much-needed gap.”
EnCORE's co-principal investigators include Yusu Wang, Barna Saha (the principal investigator), Kamalika Chaudhuri, Arya Mazumdar, Sanjoy Dasgupta, and Gal Mishne.
The other UC San Diego faculty members in the institute include professors Kamalika Chaudhuri, and Sanjoy Dasgupta from CSE; Arya Mazumdar, Gal Mishne, and Yusu Wang from HDSI; and Fan Chung Graham from CSE and the Department of Mathematics. Saura Naderi of HDSI will spearhead the outreach activities of the institute.
“Professor Barna Saha has assembled a team of exceptional scholars across UC San Diego and across the nation to explore the underpinnings of data science. This kind of institute, focused on groundbreaking research, innovative education and effective outreach, will be a model of interdisciplinary initiatives for years to come,” said Department of Computer Science and Engineering Chair Sorin Lerner.
CORE Pillars of Data Science
The EnCORE Institute seeks to investigate and transform three research aspects of Data Science: the complexity of data, optimization, and responsible computing.
“EnCORE represents exactly the kind of talent convergence that is necessary to address the emerging societal need for responsible use of data. As a campus hub for data science, HDSI is proud of a compelling talent pool to work together in advancing the field,” said HDSI founding director Rajesh K. Gupta.
Team members expressed excitement about the opportunity for interdisciplinary research that the institute will provide. They will work together to improve privacy-preserving machine learning and robust learning, and to integrate geometric and topological ideas with algorithms and machine learning methodologies to tame the complexity in modern data. They envision a new era in optimization, with strong statistical and computational components adding new challenges.
“One of the exciting research thrusts at EnCORE is data science for accelerating scientific discoveries in domain sciences,” said Gal Mishne, an assistant professor at HDSI. As part of EnCORE, the team will be developing fast, robust low-distortion visualization tools for real-world data in collaboration with domain experts. In addition, the team will be developing geometric data analysis tools for neuroscience, a field which is undergoing an explosion of data at multiple scales.
From K-12 and Beyond
A distinctive aspect of EnCORE will be the "E" component: education and engagement.
The institute will engage students at all levels, from K-12 through postdoctoral researchers and junior faculty, and will conduct extensive outreach activities at all four of its sites.
The geographic span of the institute across three regions of the United States will be a benefit as the institute executes its outreach plan, which includes regular workshops, events, and the hiring of students and postdoctoral researchers. Online and joint courses between the partner institutions will also be offered.
Activities to reach out to high school, middle school and elementary students in Southern California are also part of the institute’s plan, with the first engagement planned for this summer with the Sweetwater Union High School District to teach students about the foundations of data science.
There will also be mentorship and training opportunities with researchers affiliated with EnCORE, helping to create a pipeline of data scientists and broadening the reach and impact of the field. Additionally, collaboration with industry is being planned.
Mazumdar, an associate professor in the HDSI and an affiliated faculty member in CSE, said the team has already put much thought and effort into developing data science curricula across all levels. “We aim to create a generation of experts while being mindful of the needs of society and recognizing the demands of industry,” he said.
“We have made connections with numerous industry partners, including prominent data science techs and also with local Southern California industries including start-ups, who will be actively engaged with the institute and keep us informed about their needs,” Mazumdar added.
An interdisciplinary, diverse field and team
Data science has footprints in computer science, mathematics, statistics and engineering. In that spirit, the researchers from the four participating institutions who comprise the core team have diverse and varied backgrounds from four disciplines.
“Data science is a new, and a very interdisciplinary area. To make significant progress in Data Science you need expertise from these diverse disciplines. And it’s very hard to find experts in all these areas under one department,” said Saha. “To make progress in Data Science, you need collaborations from across the disciplines and a range of expertise. I think this institute will provide this opportunity.”
And the institute will further diversity in science, as EnCORE is being spearheaded by women who are leaders in their fields.
The concept of artificial intelligence dates back far before the advent of modern computers — even as far back as Greek mythology. Hephaestus, the Greek god of craftsmen and blacksmiths, was believed to have created automatons to work for him. Another mythological figure, Pygmalion, carved a statue of a beautiful woman from ivory, who he proceeded to fall in love with. Aphrodite then imbued the statue with life as a gift to Pygmalion, who then married the now living woman.
Throughout history, myths and legends of artificial beings that were given intelligence were common. These varied from having simple supernatural origins (such as the Greek myths), to more scientifically-reasoned methods as the idea of alchemy increased in popularity. In fiction, particularly science fiction, artificial intelligence became more and more common beginning in the 19th century.
But it wasn't until mathematics, philosophy, and the scientific method advanced enough in the 19th and 20th centuries that artificial intelligence was taken seriously as an actual possibility. It was during this time that mathematicians such as George Boole, Bertrand Russell, and Alfred North Whitehead began presenting theories formalizing logical reasoning. With the development of digital computers in the second half of the 20th century, these concepts were put into practice, and AI research began in earnest.
Over the last 50 years, interest in AI development has waxed and waned with public interest and the successes and failures of the industry. Predictions made by researchers in the field, and by science fiction visionaries, have often fallen short of reality. Generally, this can be chalked up to computing limitations. But a deeper problem, the understanding of what intelligence actually is, has been a source of tremendous debate.
Despite these setbacks, AI research and development has continued. Currently, this research is being conducted by technology corporations who see the economic potential in such advancements, and by academics working at universities around the world. Where does that research currently stand, and what might we expect to see in the future? To answer that, we’ll first need to attempt to define what exactly constitutes artificial intelligence.
You may be surprised to learn that it is generally accepted that artificial intelligence already exists. As Albert (yes, that’s a pseudonym), a Silicon Valley AI researcher, puts it: “…AI is monitoring your credit card transactions for weird behavior, AI is memorizing the numbers you write on your bank checks. If you search for ‘sunset’ in the pictures on your phone, it’s AI vision that finds them.” This sort of artificial intelligence is what the industry calls “weak AI”.
Weak AI is dedicated to a narrow task, for example Apple's Siri. While Siri is considered to be AI, it is only capable of operating in a pre-defined range that combines a handful of narrow AI tasks. Siri can perform language processing, interpretation of user requests, and other basic tasks. But Siri doesn't have any sentience or consciousness, and for that reason many people find it unsatisfying to even define such a system as AI.
Albert, however, believes that AI is something of a moving target, saying “There is a long running joke in the AI research community that once we solve something then people decide that it’s not real intelligence!” Just a few decades ago, the capabilities of an AI assistant like Siri would have been considered AI. Albert continues, “People used to think that chess was the pinnacle of intelligence, until we beat the world champion. Then they said that we could never beat Go since that search space was too large and required ‘intuition’. Until we beat the world champion last year…”
Still, Albert, along with other AI researchers, only defines these sorts of systems as weak AI. Strong AI, on the other hand, is what most laymen think of when someone brings up artificial intelligence. A Strong AI would be capable of actual thought and reasoning, and would possess sentience and/or consciousness. This is the sort of AI that defined science fiction entities like HAL 9000, KITT, and Cortana (in Halo, not Microsoft’s personal assistant).
What actually constitutes a strong AI and how to test and define such an entity is a controversial subject full of heated debate. By all accounts, we’re not very close to having strong AI. But, another type of system, AGI (Artificial General Intelligence), is a sort of bridge between weak AI and strong AI. While AGI wouldn’t possess the sentience of a Strong AI, it would be far more capable than weak AI. A true AGI could learn from information presented to it, and could answer any question based on that information (and could perform tasks related to it).
While AGI is where most current research in the field of artificial intelligence is focused, the ultimate goal for many is still strong AI. After decades, even centuries, of strong AI being a central aspect of science fiction, most of us have taken for granted the idea that a sentient artificial intelligence will someday be created. However, many believe that this isn't even possible, and a great deal of the debate on the subject revolves around philosophical concepts regarding sentience, consciousness, and intelligence.
This discussion starts with a very simple question: what is consciousness? Though the question is simple, anyone who has taken an Introduction to Philosophy course can tell you that the answer is anything but. This is a question that has had us collectively scratching our heads for millennia, and few people who have seriously tried to answer it have come to a satisfactory answer.
Some philosophers have even posited that consciousness, as it's generally thought of, doesn't exist. For example, in Consciousness Explained, Daniel Dennett argues that consciousness is an elaborate illusion created by our minds. This is a logical extension of the philosophical concept of determinism, which posits that everything is the result of a cause having only a single possible effect. Taken to its logical extreme, deterministic theory would state that every thought (and therefore consciousness) is the physical reaction to preceding events (down to atomic interactions).
Most people react to this explanation as an absurdity — our experience of consciousness being so integral to our being that it is unacceptable. However, even if one were to accept the idea that consciousness is possible, and also that oneself possesses it, how could it ever be proven that another entity also possesses it? This is the intellectual realm of solipsism and the philosophical zombie.
Solipsism is the idea that a person can only truly prove their own consciousness. Consider Descartes’ famous quote “Cogito ergo sum” (I think therefore I am). While to many this is a valid proof of one’s own consciousness, it does nothing to address the existence of consciousness in others. A popular thought exercise to illustrate this conundrum is the possibility of a philosophical zombie.
A philosophical zombie is a human who does not possess consciousness, but who can mimic consciousness perfectly. From the Wikipedia page on philosophical zombies: “For example, a philosophical zombie could be poked with a sharp object and not feel any pain sensation, but yet behave exactly as if it does feel pain (it may say “ouch” and recoil from the stimulus, and say that it is in pain).” Further, this hypothetical being might even think that it did feel the pain, though it really didn’t.
This problem is central to the debate surrounding strong AI. If we can’t even prove that another person is conscious, how could we prove that an artificial intelligence was? John Searle not only illustrates this in his famous Chinese room thought experiment, but further puts forward the opinion that conscious artificial intelligence is impossible in a digital computer.
The Chinese room argument as Searle originally published it goes something like this: suppose an AI were developed that takes Chinese characters as input, processes them, and produces Chinese characters as output. It does so well enough to pass the Turing test. Does it then follow that the AI actually “understood” the Chinese characters it was processing?
Searle says that it doesn't; the AI was just acting as if it understood the Chinese. His rationale is that a man (who understands only English) placed in a sealed room could, given the proper instructions and enough time, do the same. This man could receive a request in Chinese, follow English instructions on what to do with those Chinese characters, and provide the output in Chinese. This man never actually understood the Chinese characters, but simply followed the instructions. So, Searle theorizes, an AI would not actually understand what it is processing; it would just be acting as if it did.
It’s no coincidence that the Chinese room thought exercise is similar to the idea of a philosophical zombie, as both seek to address the difference between true consciousness and the appearance of consciousness. The Turing Test is often criticized as being overly simplistic, but Alan Turing had carefully considered the problem of the Chinese room before introducing it. This was more than 30 years before Searle published his thoughts, but Turing had anticipated such a concept as an extension of the “problem of other minds” (the same problem that’s at the heart of solipsism).
Turing addressed this problem by giving machines the same “polite convention” that we give to other humans. Though we can’t know that other humans truly possess the same consciousness that we do, we act as if they do out of a matter of practicality — we’d never get anything done otherwise. Turing believed that discounting an AI based on a problem like the Chinese room would be holding that AI to a higher standard than we hold other humans. Thus, the Turing Test equates perfect mimicry of consciousness with actual consciousness for practical reasons.
This dismissal of defining "true" consciousness is, for now, best left to philosophers as far as most modern AI researchers are concerned. Trevor Sands (an AI researcher for Lockheed Martin, who stresses that his statements reflect his own opinions, and not necessarily those of his employer) says "Consciousness or sentience, in my opinion, are not prerequisites for AGI, but instead phenomena that emerge as a result of intelligence."
Albert takes an approach which mirrors Turing’s, saying “if something acts convincingly enough like it is conscious we will be compelled to treat it as if it is, even though it might not be.” While debates go on among philosophers and academics, researchers in the field have been working all along. Questions of consciousness are set aside in favor of work on developing AGI.
Modern AI research was kicked off in 1956 with a conference held at Dartmouth College. This conference was attended by many who later became experts in AI research, and who were primarily responsible for the early development of AI. Over the next decade, they would introduce software which would fuel excitement about the growing field. Computers were able to play (and win) at checkers, solve math proofs (in some cases, finding solutions more efficient than those previously produced by mathematicians), and provide rudimentary language processing.
Unsurprisingly, the potential military applications of AI garnered the attention of the US government, and by the ’60s the Department of Defense was pouring funds into research. Optimism was high, and this funded research was largely undirected. It was believed that major breakthroughs in artificial intelligence were right around the corner, and researchers were left to work as they saw fit. Marvin Minsky, a prolific AI researcher of the time, stated in 1967 that “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”
Unfortunately, the promise of artificial intelligence wasn’t delivered upon, and by the ’70s optimism had faded and government funding was substantially reduced. Lack of funding meant that research was dramatically slowed, and few advancements were made in the following years. It wasn’t until the ’80s that progress in the private sector with “expert systems” provided financial incentives to invest heavily in AI once again.
Throughout the ’80s, AI development was again well-funded, primarily by the American, British, and Japanese governments. Optimism reminiscent of that of the ’60s was common, and again big promises about true AI being just around the corner were made. Japan’s Fifth Generation Computer Systems project was supposed to provide a platform for AI advancement. But, the lack of fruition of this system, and other failures, once again led to declining funding in AI research.
Around the turn of the century, practical approaches to AI development and use were showing strong promise. With access to massive amounts of information (via the internet) and powerful computers, weak AI was proving very beneficial in business. These systems were used to great success in the stock market, for data mining and logistics, and in the field of medical diagnostics.
Over the last decade, advancements in neural networks and deep learning have led to a renaissance of sorts in the field of artificial intelligence. Currently, most research is focused on the practical applications of weak AI, and the potential of AGI. Weak AI is already in use all around us, major breakthroughs are being made in AGI, and optimism about artificial intelligence is once again high.
Researchers today are investing heavily into neural networks, which loosely mirror the way a biological brain works. While true virtual emulation of a biological brain (with modeling of individual neurons) is being studied, the more practical approach right now is with deep learning being performed by neural networks. The idea is that the way a brain processes information is important, but that it isn’t necessary for it to be done biologically.
Trevor Sands does similar work with neural networks for Lockheed Martin. His focus is on creating “programs that utilize artificial intelligence techniques to enable humans and autonomous systems to work as a collaborative team.” Like Albert, Sands uses neural networks and deep learning to process huge amounts of data intelligently. The hope is to come up with the right approach, and to create a system which can be given direction to learn on its own.
Albert describes the difference between older weak AI approaches and the more recent neural network approaches: "You'd have vision people with one algorithm, and speech recognition with another, and yet others for doing NLP (Natural Language Processing). But, now they are all moving over to use neural networks, which is basically the same technique for all these different problems. I find this unification very exciting. Especially given that there are people who think that the brain and thus intelligence is actually the result of a single algorithm."
Basically, as an AGI, the ideal neural network would work for any kind of data. Like the human mind, this would be true intelligence that could process any kind of data it was given. Unlike current weak AI systems, it wouldn’t have to be developed for a specific task. The same system that might be used to answer questions about history could also advise an investor on which stocks to purchase, or even provide military intelligence.
As it stands, however, neural networks aren't sophisticated enough to do all of this. These systems must be "trained" on the kind of data they're taking in, and how to process it. Success is often a matter of trial and error for Albert: "Once we have some data, then the task is to design a neural network architecture that we think will perform well on the task. We usually start with implementing a known architecture/model from the academic literature which is known to work well. After that I try to think of ways to improve it. Then I can run experiments to see if my changes improve the performance of the model."
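As a highly simplified stand-in for the experiment loop Albert describes (our sketch, using scikit-learn on a synthetic dataset rather than anything from his actual work), the workflow of picking an architecture, training it, and checking whether a change improves performance looks something like this:

```python
# Toy version of the "design, train, evaluate, iterate" loop for neural
# networks, using scikit-learn's MLPClassifier on a synthetic dataset.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two candidate architectures; in practice you would start from one known to
# work well in the literature and then experiment with changes.
for hidden_layers in [(8,), (32, 16)]:
    model = MLPClassifier(hidden_layer_sizes=hidden_layers, max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    print(hidden_layers, "test accuracy:", round(model.score(X_test, y_test), 3))
```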
The ultimate goal, of course, is to find that perfect model that works well in all situations. One that doesn’t require handholding and specific training, but which can learn on its own from the data it’s given. Once that happens, and the system can respond appropriately, we’ll have developed Artificial General Intelligence.
Researchers like Albert and Trevor have a good idea of what the future of AI will look like. I discussed this at length with both of them, but have run out of time today. Make sure to join me next week here on Hackaday for the Future of AI, where we'll dive into some of the more interesting topics like ethics and rights. See you soon!
A software development degree that encompasses technical issues affecting software architecture, design, and implementation as well as process issues that address project management, planning, quality assurance, and product maintenance.
Students learn principles, methods, and techniques for the construction of complex and evolving software systems. The software engineering program encompasses both technical issues affecting software architecture, designs and implementation, as well as process issues that address project management, planning, quality assurance, and product maintenance. The program has a strong emphasis on teamwork and communication skills. The software engineering coursework maintains a balance between engineering design and software processes in both required and elective courses. As with other engineering fields, mathematics and natural science fundamentals are taken in the early years. A three-course sequence in a domain outside the program’s core requirements allows students to apply their software engineering skills to a variety of fields including science, engineering, and business. Finally, students complete a two-term senior project as the final demonstration of their abilities and preparation for immediate employment and long-term professional growth in software development organizations.
The department provides a variety of facilities where students collaborate on projects, polish their skills, and consult with faculty. Outfitted with the latest hardware and software technology, our facilities reflect our commitment to teamwork, interactive learning, and professional education. From the team rooms to the Collaboration Lab, our facilities are designed to support students and mimic a real-world software development environment.
Application Engineer; Associate Software Engineer; Embedded Software Engineer; Full Stack Developer; Global Technology Analyst; iOS Developer; Quality Assurance Engineer; Software Test Engineer; System Infrastructure Engineer; Web Developer
Apple; Constant Contact; Datto; Facebook; Google; HubSpot; IBM; Intuit; JPMorgan Chase & Co; L3harris; Lockheed Martin; Microsoft; Oracle; U.S. Department of Defense; Wayfair
Outcome Rates*: 96.3% (total percentage of graduates who have entered the workforce, enrolled in full-time graduate study, or are pursuing alternative plans such as military service or volunteering).
Knowledge Rate: 83.7% (*total percentage of graduates for whom RIT has verifiable data, compared to the national average knowledge rate of 41% per NACE).
| Outcome | % of Students |
| --- | --- |
| Employed | 96.30% |
| Full-time Graduate Study | 0% |
| Alternative Plans | 0% |
The bachelor of science in software engineering is accredited by the Engineering Accreditation Commission of ABET.
What’s different about an RIT education? It’s the career experience you gain by completing cooperative education and internships with top companies in every single industry. You’ll earn more than a degree. You’ll gain real-world career experience that sets you apart. It’s exposure–early and often–to a variety of professional work environments, career paths, and industries.
Co-ops and internships take your knowledge and turn it into know-how. Your computing co-ops will provide hands-on experience that enables you to apply your computing knowledge in professional settings while you make valuable connections between classwork and real-world applications.
Students in the software engineering degree are required to complete three blocks (40 weeks) of cooperative education experience.
Linux, or GNU/Linux to acknowledge the large number of packages from the GNU OS that are commonly used alongside the Linux kernel, is a hugely popular open-source UNIX-like operating system.
The Linux OS kernel was first released in 1991 by Linus Torvalds, who still oversees kernel development as part of a large development community. Linux runs most of the cloud, most of the web, and pretty much every noteworthy supercomputer. If you use Android or one of its derivatives, your phone runs an OS with a modified Linux kernel, and Linux is embedded in everything from set-top boxes to autonomous cars. About 1.3 million people even use it to play games on Steam.
The world of Linux is a little more complicated than that of Windows or macOS, however. The open source nature of the Linux kernel and most of its applications allows anyone to freely modify them, which has resulted in a proliferation of different versions geared towards specific functions. Each of these distributions (or ‘distros’) uses the core Linux kernel and usually some GNU packages, and then a selection of software packages variously developed internally, taken from an upstream distro, or built from other open-source software.
Thus, Pop!_OS, for example, shares most of its software with Ubuntu, from which it descends. Ubuntu was originally a fork of Debian, and still contains a large percentage of the same codebase, regularly synced. All three also use the Linux kernel and numerous GNU software packages. You can roll your own distro if you like, customised to include whatever software your use case, philosophy or personal preference demands.
This can lead to complaints about fragmentation from both users and developers targeting the platform. However, many of these distributions are closely related and the underlying Linux operating system means that - much like its Unix-compatible POSIX-compliant relatives such as OpenBSD and macOS - once you understand the fundamentals of using GNU/Linux, you can apply that knowledge to any other Linux OS and be confident that everything will work more or less as you expect.
One last point to note is that while all Linux distros rely to some extent on voluntary contributions from a community of developers for their continued development and stability, some distros are backed by large commercial software development organisations, with Canonical (which develops Ubuntu) and Red Hat being key examples. Because they benefit from full-time corporate support and upkeep, these distros are often updated more frequently than at least some of their community rivals and may be better options for businesses who prioritise stability.
Although desktop Linux is a comparatively niche use case compared to the operating system’s ubiquitous server presence, it’s also the most fun and rewarding. An Ubuntu-based distro is currently your best bet if you want things to just work with a minimum of faff, but our favourites also include distros like Arch and Slackware, which actively encourage you to cultivate a deeper understanding of the OS underlying your desktop.
System76's Pop!_OS is one of the most comfortable choices for desktop Linux users who just want to get on with things. It's based on Ubuntu, but strips out some of the more controversial elements, such as Ubuntu's default Snap package system, while adding useful features such as out-of-the-box support for Vulkan graphics. Its target audience is developers, digital artists, and STEM professionals.
Pop!_OS has a particularly pleasant graphical installation interface, designed to be quick and approachable. Its slick Cosmic desktop is based on GNOME 3, and vaguely reminiscent of macOS's GUI layout. Future iterations are set to ship with an entirely new window manager, developed in-house by System76. System76 is also an OEM and makes laptop, desktop and server systems, all of which run the distro by default.
Arch is a thoroughly modern, rolling-release distro that nonetheless aims to provide a classic Linux experience, giving you as much hands-on control over your OS and its configuration as possible. You’ll have to choose your own desktop environment after installation, for example.
Its official repositories typically update quickly enough, but these exist alongside the bleeding-edge community-driven AUR (Arch User Repository) system, from which you can compile packages and install them as usual via the Pacman package manager. For those who don't want to dive straight into the DIY ethos, Manjaro is the most popular of its derivatives, built to be more beginner-friendly, with a graphical installation interface and quality-of-life tools for driver management.
Based on Debian, Canonical’s Ubuntu Linux shares a significant chunk of its architecture and software, such as the friendly apt package management system. But it brings a lot of unique features to the table. Canonical’s Snap packages, for example, are designed to make it easy to package and distribute tamper-proof software with all necessary dependencies included, making it extremely well-suited to office workstations.
Ubuntu operates on a fast development cycle, particularly compared to Debian’s slow but stable releases. It also cheerfully provides proprietary drivers and firmware where needed, and, although Ubuntu itself is fully free, Canonical is here to make a profit, meaning that enterprise-grade support contracts are available, and the developers’ approach to security is tuned to the needs of business.
One of the longest-established distros, dating from 1993, Debian has numerous popular derivatives, from Ubuntu to Raspberry Pi OS. It introduced the widely-used and much-cloned apt package management system for easy software installation and removal, and to this day prioritises free, open and non-proprietary drivers and software, as well as wide-ranging hardware support.
While Ubuntu and Red Hat are tailored to enterprise, Debian remains a firmly non-profit project dedicated to the principles of the free software movement, making it a good choice for GNU/Linux purists who want a stable OS that’s nonetheless comfortable to use, with a variety of popular GUIs to choose from.
Another 1993-vintage distro, Slackware (no relation to the popular collaboration platform) is still very much alive and kicking, despite a website whose front page was last updated in 2016. That’s set to change soon with the imminent release of Slackware 15.0, which those who want the latest features can already access in the form of Slackware-Current.
As you might gather from the slow release cycle, Slackware is built for long-term stability. It also maintains several classic Linux features that other distros have abandoned, making it a popular choice with many old-school users for that very reason. It uses a BSD-style file layout and hands-on ncurses installation interface, is deliberately “UNIX-like” and, most notably, eschews Red Hat’s now-ubiquitous systemd, so you’ll be using init rather than systemctl commands to manage services. Refreshingly, it boots to the command line by default, but you can choose from a range of desktop environments. You’ll probably also want to add a package manager such as swaret.
If you want a “pure” and slightly old-school Linux experience, Slackware is an excellent choice and a great way of getting a handle on the underpinnings of Linux as an OS. It’ll run on almost anything, from a 486 to a Raspberry Pi, to your latest gaming PC, with support for x86, amd64, and ARM CPUs.
Not every PC is an eight-core gaming behemoth with 32GB RAM and the latest graphics card. But versatile, lightweight Linux distributions mean that an underpowered netbook or Windows XP-era PC can be brought back into use as a genuinely functional home computer, with all the security updates and modern software support that you’ll need.
One of the best-known lightweight Linuxes, Puppy Linux isn't a single distribution, but rather a collection of different Linux distros, each set up to provide a consistent user experience when it comes to look, feel, and features. Official versions are currently available based on Ubuntu, Raspbian, Debian, and Slackware, with both 32- and 64-bit versions available for most of these.
They're all designed to be easy to use even for non-technical people, small - around 400MB or less in size - and equipped with everything you'll need to make a PC functional. Having to choose your Puppy can be a little confusing, but there's a guide to help you through it. Although 32-bit CPUs are supported, you'll want an Athlon processor or later for the latest versions to be viable. For more modern systems, note that Puppy doesn't support UEFI, so switch your BIOS into legacy mode before installation. ARM architecture is also supported in the form of Raspberry Pi builds.
Ubuntu MATE - pronounced mah-tay like the hot beverage - isn’t the absolute lightest-weight distro around, requiring at least 1GB RAM and a 64-bit Core 2 Duo equivalent processor. It is nonetheless a superb choice if you need to bring an elderly home PC or underpowered laptop back into viable use.
The MATE desktop environment is popular with Windows XP veterans and comes with tweak tools already installed for easy customisation. And as it’s an Ubuntu variant, you get that distro’s wide-ranging repositories, excellent hardware support and easy gaming, with a user interface that’s a bit lighter and more comfortable for Linux newcomers.
Although a 32-bit x86 distribution is no longer available, you will find both 64- and 32-bit versions for Raspberry Pi, and versions specifically designed for a small range of pocket PCs.
A few versions of this ultra-lightweight distro are available to download: A fully functional command line OS image (16MB), a GUI version (21MB), and an installation image (163MB) that’ll support non-US keyboards and wireless networking, as well as giving you a range of window managers to choose from.
As you’d assume from its minuscule file size, Tiny Core doesn’t come with much software by default, but its repositories include the usual range of utilities, browsers and office software that you’ll need to make use of your PC. You can run it on a USB drive, CD, or stick it on a hard disk, and it’ll work on any x86 or amd64 system with at least 46 megabytes of RAM and a 486DX processor, although 128MB RAM and a P2 are recommended. Arm builds are also available, including Raspberry Pi support.
In practice, most distros that are good on the desktop are entirely adequate for use as part of your enterprise server infrastructure, although you’ll probably want to install a version without a graphical desktop for most use cases. If you operate an enterprise server, you’ll want something with stable Long Term Support versions, responsive security updates, and that’s familiar enough to make it easy to troubleshoot. Right now, Ubuntu and Red Hat derivatives are particularly solid choices.
Red Hat Enterprise Linux is synonymous with big business. Although RHEL’s source code is, of course, open, it uses significant non-free, trademarked and proprietary elements, and updates that you need a subscription to access. Red Hat emphasises security, hands-on subscriber support and regulatory-compliant technologies and certification. Its developers also put a lot of effort into its enterprise-grade GUI, which can be more comfortable for those who’d rather not do all their configuration at the command line.
Red Hat itself - now a subsidiary of IBM - has contributed important elements to Linux as a whole. With fully free community Red Hat derivative CentOS’s move to give up Long Term Support (LTS) versions in favour of a rolling release model (via CentOS Stream), RHEL is perhaps the best option for consistent, long-term stability for anyone who requires a Red Hat based Linux distribution for business use.
Fortunately for SMBs, the no-cost version of RHEL has been expanded to compensate for the loss of traditional CentOS, allowing individual developers and small teams with up to 16 production systems to get a free subscription, providing access to the distro’s update repositories.
Amazon’s own Red Hat derivative, Amazon Linux, is designed to work optimally on the cloud service provider’s platform. It supports all features of Amazon’s EC2 instances and its repositories include packages designed to seamlessly integrate with AWS’s many other services. Long Term Support versions are available, making it an appealing CentOS replacement, as long as you’re happy moving your machines to the AWS cloud.
Although its VM image and containerised versions are designed first and foremost for deployment on AWS, you can download VM images for on-premises use if you want them.
While Amazon Linux is based on CentOS, its successor, Amazon Linux 2022, is built on Fedora, but reworked as a server distro.
While most desktop Linux distros are perfectly capable when used as servers, we're going out of our way to recommend Ubuntu for both roles, as it's incredibly easy to roll out a wide range of secure and fully-functional servers from its packages. It's also free and conspicuously quick when it comes to security updates.
Its Long Term Support versions get five-year security and ten-year extended maintenance guarantees. As well as x86 architecture, it’s available for ARM, as well as IBM’s POWER server and Z mainframe platforms, although its legacy hardware support pales in comparison to Debian’s.
Ubuntu is entirely free for everyone, but you can subscribe to Canonical’s commercial support if you need it, and Ubuntu’s popularity means that it’s widely supported by third-party firms and community forums.
Some people use Linux because it’s free, or because it’s fun to tinker with, or because they don’t like being beholden to a large corporate entity. Others use Linux for security: either to maintain it or to test it. There are a number of distros designed for those who want to lock down their privacy and security at all costs, as well as distros built for infosec professionals who need to make use of more specialised tools.
If you work on other people’s computers or on public networks and you’d like to minimise the risk of your identity, communications and data being compromised, TAILS is the OS-on-a-stick for you.
Based on Debian, TAILS' most distinctive feature is that it routes all internet traffic via TOR by default and, when used as a live distro, it lives on an 8GB+ USB stick and runs in RAM, leaving no trace on the host PC unless you deliberately choose to leave one. The 1.2GB live image includes a GNOME 3 desktop environment, with all the conveniences of a modern desktop Linux.
Kali is not your everyday desktop distro - it isn't recommended for all use cases - but for those looking for pen testing and red-team-oriented security functions, it's a great choice. It's based on Debian and ships with a lightweight Xfce desktop environment by default, with GNOME and KDE Plasma versions available as well.
The main attraction here is the ready-to-go security tooling. There's a wide range of 32- and 64-bit images for various platforms and use cases, including password cracking, VoIP research and RFID exploitation. In total, Kali comes with 600 security tools, though there are very few use cases that will need all of them. There is also specialist hardware support, such as Kali NetHunter for Android and a number of ARM images, including support for Apple M1 hardware.
There's a guide to help you work out your exact requirements, from storage (2GB to 20GB) to which security tools you need. If you opt for the basic installation, you'll be able to use metapackages to pull down exactly the tools you need.
ParrotOS may be a single distro, but it comes in two editions; both are based on Debian's Testing branch and are available with the MATE, KDE and XFCE desktop environments.
The Home Edition is a lightweight OS for daily use. It has a specific focus on privacy and can quickly be assembled into a pen testing toolkit for those who need one.
Other options, such as the TAILS amnesiac live distro, provide greater privacy, but ParrotOS comes with some decent pre-installed capabilities, including secure file sharing, cryptography, end-to-end encrypted comms, and AnonSurf for those who want to proxy all their online traffic through the TOR network.
For that little bit more security, there's ParrotOS Security Edition, a Kali Linux alternative. It comes with pen testing and digital forensics tools, such as network sniffers and port scanners, as well as car hacking features. ParrotOS is a community project, so there are no enterprise options like those you would find with Kali, but it sticks close to standard GNU/Linux conventions and there's a fairly large community of users to turn to for support or advice.
Research seminar for doctoral and Master's students to hear researchers from academia, industry, and government present research-related topics in civil and environmental engineering. Invited speakers will present the latest research advances in the fields of environmental engineering, geotechnical engineering, structural engineering and transportation engineering. Attendance is mandatory for doctoral and MS students with the thesis option. Thesis requirements and research methods will be introduced in various talks.
Computer Based Analysis of Structures (Formerly 14.503)The course is an introduction to the finite element displacement method for framed structures. It identifies the basic steps involved in applying the displacement method that can be represented as computer procedures. The course covers the modeling and analysis of 2-dimensional and 3-dimensional structures, such as cable-stayed structures, arches, space trusses, space frames, and shear walls. The analysis is done for both static and dynamic loading. The study is done using MATLAB, GTSTRUDL, and Mathcad software.
Advanced Strength of Materials (Formerly 14/10.504)Stress and strain at a point; curved beam theory, unsymmetrical bending, shear center, torsion of non-circular sections; theories of failure; selected topics in solid mechanics.
Concrete Materials (Formerly 14.505)This course introduces fundamental and advanced topics on the properties of concrete materials. Fundamental topics include the formation, structure, mechanical behavior, durability, fracture, and deterioration of concrete. Theoretical treatments on the deformation, fracture and deterioration of concrete are also addressed. Advanced topics include the electromagnetic properties of concrete, high performance concrete (HPC), high-strength concrete (HSC), fiber-reinforced concrete, other special concretes, and the green construction of concrete.
Pre-Req: 14.310 Engineering Materials.
Practice of Structural Engineering (Formerly 14.508)This course covers the practice of structural engineering as it deals with the design of structures such as buildings and bridges, the identification of loads and design variables, and design detailing for concrete and steel structures. The emphasis will be placed on the use and interpretation of the ACI318-09, AISC and AASHTO codes and the GTSTRUDL software.
Inspection and Monitoring of Civil Infrastructure (Formerly 14.511)In this course, principles and applications of inspection and monitoring techniques for the condition assessment of aged/damaged/deteriorated civil infrastructure systems such as buildings, bridges, and pipelines, are introduced. Current nondestructive testing/evaluation (NDT/E) methods including optical, acoustic/ultrasonic, thermal, magnetic/electrical, radiographic, microwave/radar techniques are addressed with a consideration of their theoretical background. Wired and wireless structural health monitoring (SHM) systems for civil infrastructure are also covered. Applications using inspection and monitoring techniques are discussed with practical issues in each application.
Structural Stability (Formerly 14.512)This course provides a concise introduction to the principles and applications of structural stability for their practical use in the design of steel frame structures. Concepts of elastic and plastic theories are introduced. Stability problems of structural members including columns, beam-columns, rigid frames, and beams are studied. Approaches in evaluating stability problems, including energy and numerical methods, are also addressed.
Cementitious Materials for Sustainable ConcreteThis course is designed to introduce advanced topics in cement hydration chemistry, materials characterization and concrete sustainability. Advanced topics in the chemistry of commonly used cementitious materials, micro-structure, mechanical properties, durability and sustainability will be offered. Students will learn and practice how to characterize and analyze the roles of chemical admixtures and supplementary cementitious materials in improving concrete properties. Chemical issues involved in the engineering behavior of concrete will be addressed. A service-learning project about sustainable concrete will be provided. Emerging topics such as self-healing concrete, self-consolidating concrete, smart concrete, 3D concrete printing and ultra-high performance concrete will also be covered.
Pre-req: CIVE.3100 Engineering Materials, or CIVE.5050 Concrete Materials, or Permission of Instructor.
Reliability Analysis (Formerly 14.521)A review of the elementary principles of probability and statistics followed by advanced topics including decision analysis, Monte Carlo simulation, and system reliability. In-depth quantitative treatment in the modeling of engineering problems, evaluation of system reliability, and risk-benefit decision management.
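To give a concrete, hedged flavour of the Monte Carlo simulation topic listed above, the short sketch below estimates a probability of failure for a hypothetical load-versus-resistance limit state; the distributions and parameter values are invented for illustration and are not course material.

```python
# Minimal Monte Carlo reliability sketch (illustrative only; the limit state,
# distributions and parameters are hypothetical).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical lognormal resistance R and load effect S.
R = rng.lognormal(mean=np.log(50.0), sigma=0.10, size=n)
S = rng.lognormal(mean=np.log(30.0), sigma=0.25, size=n)

failures = np.sum(S >= R)   # failure whenever the load reaches the resistance
pf = failures / n
print(f"Estimated probability of failure: {pf:.4f}")
```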
Geotechnical and Environmental Site Characterization (Formerly 14.527)This course is designed to give students a comprehensive understanding of various site investigation and site assessment technologies employed in geotechnical and environmental engineering. The course begins with an introduction to site investigation planning and various geophysical methods including: seismic measurements, ground penetrating radar, electrical resistivity, electromagnetic conductivity, and time domain reflectometry. Drilling methods for soil, gas and ground water sampling; decontamination procedures; and long term monitoring methods are studied. Emphasis in this course is placed on conventional and state-of-the-art in situ methods for geotechnical and environmental site characterization: standard penetration test, vane shear test, dilatometer test, pressuremeter test and cone penetration tests. Modern advances in cone penetrometer technology, instrumented with various sensors (capable of monitoring a wide range of physical and environmental parameters: load, pressure, sound, electrical resistivity, temperature, pH, oxidation reduction potential, chemical contaminants), are playing a major role in site characterization. Principles underlying these methods along with the interpretation of test data will be covered in detail. The course will also look into emerging technologies in the area of site characterization. (3-0)3
Drilled Deep Foundations (Formerly 14.528)Design and analyses of drilled deep foundations including: Deep foundations classification and historical perspective. Cost analysis of foundations. Construction methods and monitoring techniques. Static capacity and displacement analyses of a single drilled foundation and a group under vertical and lateral loads. Traditional and alternative load test methods - standards, construction, interpretation, and simulation. Integrity testing methods. Reliability based design using the Load and Resistance Factor design (LRFD) methodology application for drilled deep foundations.
Pre-req: CIVE.5310 Advanced Soil Mechanics, or Permission of Instructor.
Engineering with Geosynthetics (Formerly 14.529)Rigorous treatment in the mechanism and behavior of reinforced soil materials. Laboratory and insitu tests for determining the engineering properties of geosynthetics (geotextiles, geomembranes, geogrids and geocomposites). Design principles and examples of geosynthetics for separation, soil reinforcement and stabilization, filtration and drainage.
Driven Deep Foundations (Formerly 14.530)Design and analyses of driven deep foundations including: Deep foundations classification and historical perspective. Effects of pile installation. Static capacity and settlement analysis of a single pile and a pile group under vertical loads. Insight into pile resistance including soil behavior and interfacial friction. Driven pile load test standards, construction, interpretation, and simulation. Dynamic analysis of driven piles, the wave equation analysis, dynamic measurements during driving and their interpretation. Reliability based design using the Load and Resistance Factor design (LRFD) methodology application for driven deep foundations.
Pre-req: CIVE.5310 Advanced Soil Mechanics, or Permission of Instructor.
Advanced Soil Mechanics (Formerly 14.531)Theories of soil mechanics and their application. Drained and undrained stress-strain and strength behavior of soils. Lateral earth pressures, bearing capacity, slope stability, seepage and consolidation. Lab and insitu testing.
Theoretical & Numerical Methods in Soil Mechanics (Formerly 14.532)Geotechnical practice employs computer programs that incorporate numerical methods to address problems of stability, settlement, deformation, and seepage. These methods are based on theoretical understanding of the behavior of soils, and correct use of commercial software requires that the engineer understand theoretical bases of the numerical algorithms and how they work. This course addresses the description of stress and strain in the context of geotechnical engineering and the basic concepts of numerical and computational methods, including discretization errors, computational procedures appropriate to different classes of problem, and numerical instability. It will then apply the insights to the three major problems of geotechnical analysis: settlement, stability, and fluid flow.
Pre-req: MATH 2360 Eng. Differential Equations, and CIVE 3300 Soil Mechanics.
Advanced Foundation Engineering (Formerly 14.533)Design and analysis of shallow foundations, excavations and retaining structures including: site exploration, bearing capacity and settlement theories, earth pressures, braced and unbraced excavations, rigid and flexible retaining structures, reinforced earth, dewatering methods and monitoring techniques.
Soil Dynamics and Earthquake Engineering (Formerly 14.534)This course addresses the dynamic properties of soils and basic mechanical theory of dynamic response. It will apply these results to analysis and design of dynamically loaded foundations. A basic understanding of earthquakes - where they occur, their quantitative description, how the complicated patterns of motions are captured by techniques such as the response spectrum, and how engineers design facilities to withstand earthquakes - will be addressed. In particular, the course will consider three topics of current professional and research interest: probabilistic seismic hazard analysis (PSHA), soil liquefaction, and seismically induced displacements. The emphasis will be on geotechnical issues, but some time will be devoted to structural considerations in earthquake resistant design.
Soil Engineering (Formerly 14.536)The study of soil as an engineering material, and its use in earth structures (e.g. dams, road embankments), flow control, and compacted fills. Stability of natural and man-made slopes, soil reinforcement and stabilization.
Experimental Soil Mechanics (Formerly 14.537)Application of testing procedures to the evaluation of soil type and engineering properties. Testing for classification, permeability, consolidation, direct and triaxial shear and field parameters. The technical procedures are followed by data analysis, evaluation and presentation. Critical examination of standard testing procedures, evaluation of engineering parameters, error estimation and research devices.
Soil BehaviorStudy of the physico-chemical and mechanical behavior of soil. Topics include: soil mineralogy, formation, composition, concepts of drained and undrained stress-strain and strength behavior, and frozen soils.
Ground Improvement (Formerly 14.539)Design and construction methods for strengthening the properties and behavior of soils. Highway embankments, soil nailing, soil grouting, landslide investigation and mitigation, dynamic compaction, stone columns.
Urban Transportation Planning (Formerly 14.540)Objectives and procedures of the urban transportation planning process. Characteristics and current issues of urban transportation in the United States (both supply and demand). Techniques of analysis, prediction and evaluation of transportation system alternatives. Consideration of economic, environmental, ethical, social and safety impacts in the design and analysis of transportation systems.
Advanced Highway Geometric DesignDevelopment of the principles of modern roadway design while addressing context-specific design requirements and constraints. Topics will include guidelines for highway design, design and review of complex geometry, geometric design to address safety and operational concerns, multi-modal design for signalized and un-signalized intersections, complete streets design concepts, and superelevation. Course-work will also include principles for presenting transportation designs to the public, transportation advocates, and private clients.
Pre-req: CIVE.3400 Transportation Engineering, or Permission of Instructor.
Traffic Engineering (Formerly 14.541)Engineering principles for safe and efficient movement of goods and people on streets and highways, including aspects of (a) transportation planning; (b) geometric design; (c) traffic operations and control; (d) traffic safety; and (e) management of transportation facilities. Topics include: traffic stream characteristics; traffic engineering studies; capacity and level-of-service analysis; traffic control; simulation of traffic operations; accident studies; parking studies; environmental impacts.
Hazardous Materials TransportationHazmat transportation, safety and security are a convergence of operations, policies and regulation, and planning and design. This course will address the multimodal operations, vessels, technologies, packaging and placarding involved in the safe and secure transportation of hazmat. Safety and security rules, regulations, emergency preparedness and response, industry initiatives and programs, and U.S. government agencies governing hazmat transportation will be included, as well as international impacts on hazmat transportation safety and security.
Transportation Network Analysis (Formerly 14.542)This course introduces engineering students to basic transportation network analysis skills. Topics covered include fundamentals of linear and nonlinear programming, mathematical representations of transportation networks, various shortest path algorithms, deterministic user equilibrium traffic assignment, stochastic user equilibrium traffic assignment, dynamic traffic assignment, heuristic algorithms for solving traffic assignment problems, and transportation network design.
Pre-req: CIVE 3720 Civil Engineering Systems and CIVE 3400 Transportation Engineering.
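As a hedged illustration of the shortest path algorithms mentioned in the preceding course description, here is a minimal Dijkstra sketch over a small, made-up road network; the graph, node names and travel times are invented for illustration.

```python
# Minimal Dijkstra shortest-path sketch over a toy road network (illustrative only).
import heapq

def dijkstra(graph, source):
    """Return shortest travel times from source to every node.
    graph maps node -> list of (neighbour, edge_cost) pairs."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical network: nodes are intersections, weights are travel times in minutes.
network = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(dijkstra(network, "A"))   # {'A': 0.0, 'B': 3.0, 'C': 2.0, 'D': 8.0}
```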
Traffic Principles for Intelligent Transportation Systems (Formerly 14.543)The objective of this course is to introduce the student to the traffic principles that are pertinent for the planning, design and analysis of Intelligent Transportation Systems (ITS). The course is oriented toward students that come from different disciplines and who do not have previous background in traffic or transportation principles. It is designed as an introductory course that will enable the student to pursue more advanced courses in transportation systems subsequently.
Transportation Economics and Project Evaluation (Formerly 14.544)The course offers an overview of the fundamental principles of transportation economics. Emphasizes theory and applications concerning demand, supply and economics of transportation systems. Covers topics such as pricing, regulation and the evaluation of transportation services and projects. Prerequisites: Students should have knowledge of transportation systems and basic microeconomics.
Public Transit Plan and Design (Formerly 14.545)Planning and design of public transportation systems and their technical, operational and cost characteristics. Discussion of the impact of public transportation on urban development; the different transit modes, including regional and rapid rail transit (RRT), light rail transit (LRT), buses, and paratransit, and their relative role in urban transportation; planning, design, operation and performance of transit systems (service frequency and headways, speed, capacity, productivity, utilization); routes and networks; scheduling; terminal layout; innovative transit technologies and their feasibility.
Pavement Design (Formerly 14.546)Fundamentals of planning, design, construction and management of roadway and airport pavements. Introduction to the theory and the analytical techniques used in pavement engineering. Principal topics covered: pavement performance, analysis of traffic, pavement materials; evaluation of subgrade; flexible and rigid pavement structural analysis; reliability design; drainage evaluation; design of overlays; and pavement distresses.
Airport Planning and Design (Formerly 14.547)Planning and design of civil airports. Estimation of air travel demand. Aircraft characteristics related to design; payload, range, runway requirements. Analysis of wind data, runway orientation and obstruction free requirements. Airport configuration, aircraft operations, and capacity of airfield elements. Design of the terminal system, ground access system, and parking facilities.
Traffic Management and Control (Formerly 14.548)The course presents modern methods of traffic management, traffic control strategies and traffic control systems technology. Main topics covered include: transportation systems management (TSM); traffic control systems technology; control concepts - urban and suburban streets; control and management concepts - freeways; control and management concepts - integrated systems; traveler information systems; system selection, design and implementation; systems management; ITS plans and programs. The course will also include exercises in the use and application of traffic simulation and optimization models such as: CORSIM, TRANSYT and MAXBAND/MULTIBAND.
Traffic Flow and Emerging Transportation Technologies (Formerly 14.549)Traffic flow theories seek to describe through precise mathematical models (a) the interactions between vehicles and the roadway system and (b) the interactions among vehicles. This course covers both conventional human-driven vehicles and the emerging connected and automated vehicles. Such theories form the basis of the models and procedures used in design and operational analysis of streets and highways. In particular, the course examines the fundamental traffic flow characteristics and the flow-speed-density relationship, as well as time and space headway, string stability, traffic flow stability, popular analytical techniques for traffic stream modeling at both microscopic and macroscopic levels, shock wave analysis, and simulation modeling of traffic systems.
Pre-req: CIVE.3400 Transportation Engineering, or Permission of Instructor.
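For readers who have not met a flow-speed-density relationship before, the classic Greenshields model is one simple, hedged illustration (the course may use different or more elaborate models):

```latex
% Greenshields' linear speed-density model and the flow-density curve it implies
v(k) = v_f\left(1 - \frac{k}{k_j}\right), \qquad
q(k) = k\,v(k) = v_f\left(k - \frac{k^2}{k_j}\right)
```

Here v_f is the free-flow speed, k_j the jam density, k the density and q the flow; under this model flow peaks at k = k_j/2, giving q_max = v_f k_j / 4.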
Behavior of Structures (Formerly 14.550)Classical and matrix methods of structural analysis applied to complex plane trusses. Elementary space truss analysis. Elementary model analysis through the use of influence lines for indeterminate structures. The digital computer and problem oriented languages as analytical tools.
Advanced Steel Design (Formerly 14.551)Elastic and plastic design of structural steel systems, residual stresses, local buckling, beam-columns, torsion and biaxial bending, composite steel-concrete members, load and resistance factor design.
Design of Concrete Structures (Formerly 14.552)The main objective of this course is to expand the students' knowledge and understanding of reinforced concrete behavior and design. Advanced syllabus at material, element, and system level are built on quick reviews of undergraduate level knowledge and are related to current design codes.
Wood Structures (Formerly 14.553)Review of properties of wood, lumber, glued laminated timber and structural-use panels. Review of design loads and their distribution in wood-frame buildings. Design of wood members in tension, compression and bending; and design of connections.
Finite Element Analysis (Formerly 14.556)Finite element theory and formulation, software applications, static and dynamic finite element analysis of structures and components.
Structural Dynamics (Formerly 14.557)Analysis of typical structures subjected to dynamic force or ground excitation using direct integration of equations of motion, modal analysis and approximate methods.
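As a hedged sketch of what direct integration of the equations of motion looks like in practice, the snippet below applies the central difference method to a single-degree-of-freedom oscillator m*u'' + c*u' + k*u = p(t); the mass, damping, stiffness and forcing values are invented for illustration.

```python
# Central-difference time integration for a single-degree-of-freedom oscillator:
# m*u'' + c*u' + k*u = p(t).  All parameter values are hypothetical.
import numpy as np

m, c, k = 1.0, 0.2, 40.0          # mass, damping, stiffness (illustrative)
dt, steps = 0.01, 500             # time step and number of steps
t = np.arange(steps + 1) * dt
p = 5.0 * np.sin(4.0 * t)         # a made-up harmonic force history

u = np.zeros(steps + 1)           # displacement response
u0, v0 = 0.0, 0.0
a0 = (p[0] - c * v0 - k * u0) / m
u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0   # fictitious displacement at t = -dt

khat = m / dt**2 + c / (2 * dt)
a_coef = k - 2 * m / dt**2
b_coef = m / dt**2 - c / (2 * dt)

for i in range(steps):
    u_next = (p[i] - a_coef * u[i] - b_coef * u_prev) / khat
    u_prev = u[i]
    u[i + 1] = u_next

print("peak displacement ~", u.max())
```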
Bridge Design (Formerly 14.558)Analysis and design of modern bridges, using computer software for the 3-D modeling of trial bridges under dead and live loading and seismic excitation. AASHTO specifications are used for the design of superstructures and substructures (abutments, piers, and bearings) under group load combinations.
Design of Masonry Structures (Formerly 14.559)Fundamental characteristics of masonry construction. The nomenclature, properties, and material specifications associated with basic components of masonry. The behavior of masonry assemblages subjected to stresses and deformations. Design of un-reinforced and reinforced masonry structures in accordance with current codes.
Physical Chemical Treatment Processes (Formerly 14.561)Course provides a theoretical understanding of various chemical and physical unit operations, with direct application of these operations to the design and operation of water and wastewater treatment processes. Topics include colloid destabilization, flocculation, softening, precipitation, neutralization, aeration and gas transfer, packed & tray towers, oxidation, disinfection, reverse osmosis, ultrafiltration, settling, activated carbon adsorption, ion exchange, and filtration.
Physical and Chemical Hydrogeology (Formerly 14.562)Well hydraulics for the analysis of groundwater movement. A review of the processes of diffusion, dispersion, sorption, and retardation as related to the fate and transport of organic contaminants in groundwater systems. Factors influencing multi-dimensional contaminant plume formation and migration are addressed. It is the goal of this course to provide environmental scientists and engineers with the technical skills required to understand groundwater hydrology and contaminant transport within aquifers. A term paper and professional presentation in class regarding a relevant topic is required.
Hydrology & Hydraulics (Formerly 14.564)This course utilizes engineering principles to quantitatively describe the movement of water in natural and manmade environmental systems. Topics include: the hydrologic cycle, stream flow and hydrographs, flood routing, watershed modeling, subsurface hydrology, and probability concepts in hydrology; hydraulic structures, flow in closed conduits, pumps, open channel flow, and elements of storm and sanitary sewer design will also be addressed.
Environmental Applications and Implications of NanomaterialsThis course will cover (I) novel properties, synthesis, and characterization of nanomaterials; (II) environmental engineering applications of nanomaterials, with an emphasis on nano-enabled water and wastewater treatment technologies such as membrane processes, adsorption, photo-catalysis, and disinfection; and (III) health and environmental impacts of nanomaterials, focusing on potential mechanisms of biological uptake and toxicity.
Environmental Aquatic Chemistry (Formerly 14.567)This course provides an understanding of the principles of aquatic chemistry and equilibria as they apply to environmental systems, including natural waters, wastewater and treated waters.
Environmental Fate and Transport (Formerly 14.568)The fate of contaminants in the environment is controlled by transport processes within a single medium and between media. The similarities in contaminant dispersion within air, surface water and groundwater will be emphasized. Interphase transport processes such as volatilization and adsorption will then be considered from an equilibrium perspective followed by the kinetics of mass transfer across environmental interfaces. A professional presentation of a selected paper or group of papers concerning a course topic is required.
Micropollutants in the EnvironmentThis course focuses on the generation, fate and transformation, transport, and the impacts of micropollutants in the environment, with emphasis on soil and water matrices. Topics will include nanomaterials and organic micropollutants such as pharmaceuticals, antimicrobials, illicit drugs, and personal care products. Course delivery will be a combination of lectures, experimental analysis, and discussions of assigned reading materials.
Wastewater Treatment and Storm Water Management Systems (Formerly 14.570)The era of massive subsidies for construction of sanitary sewers and centralized, publicly operated treatment works (POTWs) has passed. Non-point pollution from sources such as onsite disposal systems has become a major focus of concern in our efforts to protect and improve ground and surface water quality. Much of the new construction in areas not already served by centralized collection and treatment must use alternative technologies. This course is design oriented. The variously available technologies are studied in depth. Students evaluate various technologies as they may be applied to a complex problem for which information is available, and develop an optimum problem solution.
Surface Water Quality Modeling (Formerly 14.571)Theory and application of surface water quality modeling will be combined interactively throughout the course. Data from a stream will be utilized in order to bring a public domain model into operation.
Marine and Coastal Processes (Formerly 14.572)This course focuses on the coastal dynamics of currents, tides, waves, wave morphology and their effects on beaches, estuaries, mixing and sediment transport/accretion processes. Generalized global aspects of atmospheric and hydrospheric interactions with ocean currents are also presented.
Solid Waste Engineering (Formerly 14.573)Characterization, handling and disposal of municipal, industrial and hazardous wastes. Technologies such as landfills, recycling, incineration and composting are examined. A term paper and professional presentation in class regarding a relevant topic is required.
Groundwater Modeling (Formerly 14.575)Groundwater Modeling is designed to present the student with fundamentals, both mathematical and intuitive, of analytic and numeric groundwater modeling. An introductory course in groundwater hydrology is a prerequisite for Groundwater Modeling, and the student should be familiar with running text editors and spreadsheets on IBM-compatible computers. The semester will start with basic analytic solutions and image theory to aid in the development of more complex numeric models. Emphasis will then switch to numeric ground water flow models (MODFLOW) and the use of particle tracking models (GWPATH) to simulate the movement of solutes in ground water. The numeric modeling process will focus on forming the problem description, selecting boundary conditions, assigning the model parameters, calibrating the model, and preparing the model report. Course topics include: Analytic Methods, Numeric Methods, Conceptual Model and Grid Design, Boundary Conditions, Sources, and Sinks, and Particle Tracking.
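To give a hedged sense of what a simple numeric ground water flow model does, the sketch below solves for steady-state hydraulic head in a homogeneous confined aquifer (Laplace's equation) on a small grid using Jacobi iteration; the grid size and boundary heads are invented, and a real code such as MODFLOW handles far more.

```python
# Minimal steady-state 2D groundwater head sketch (Laplace's equation) solved
# by Jacobi iteration.  Grid, boundary heads and tolerance are illustrative only.
import numpy as np

nx, ny = 50, 50
h = np.full((ny, nx), 95.0)      # initial guess for hydraulic head (m)
h[:, 0], h[:, -1] = 100.0, 90.0  # fixed heads on the west and east boundaries

for _ in range(5000):
    h_new = h.copy()
    # interior nodes: average of the four neighbours (discrete Laplace equation)
    h_new[1:-1, 1:-1] = 0.25 * (h[2:, 1:-1] + h[:-2, 1:-1] +
                                h[1:-1, 2:] + h[1:-1, :-2])
    # no-flow boundaries at north and south: mirror the adjacent row
    h_new[0, 1:-1] = h_new[1, 1:-1]
    h_new[-1, 1:-1] = h_new[-2, 1:-1]
    if np.max(np.abs(h_new - h)) < 1e-6:
        h = h_new
        break
    h = h_new

print("head at the domain centre ~", h[ny // 2, nx // 2])
```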
GIS Applications in Civil and Environmental Engineering (Formerly 14.576)This course introduces students to the basic concepts of Geographic Information Systems (GIS) and GIS applications in Civil and Environmental Engineering. Topics to be covered include GIS data and maps, queries, map digitization, data management, spatial analysis, network analysis, geocoding, coordinate systems and map projections, and editing. Examples related to transportation, environmental, geotechnical and structural engineering will be provided to help students better understand how to apply GIS in the real world and gain hands-on experience. This course will consist of lectures and computer work.
Biological Wastewater Treatment (Formerly 14.578)Course covers the theoretical and practical aspects of biological wastewater treatment operations. Topics include kinetics of biological growth and substrate utilization, materials balance in chemostats and plug flow reactors, activated sludge process analysis and design, sedimentation and thickening, nitrification and denitrification, phosphorus removal, fixed-film processes analysis and design, anaerobic processes analysis and design, aerated lagoons and stabilization ponds, and natural treatment systems.
Green and Sustainable Civil Engineering (Formerly 14.579)This course focuses on various green and sustainable materials and technologies applicable to five areas of civil engineering: environmental engineering, water resources engineering, structural engineering, transportation engineering, and geotechnical engineering. This course also covers current green building laws and introduces fundamentals of entrepreneurship and patent/copyright laws.
Engineering Systems Analysis (Formerly 14.581)The course presents advanced methods of operations research, management science and economic analysis that are used in the design, planning and management of engineering systems. Main topics covered include: the systems analysis methodology, optimization concepts, mathematical programming techniques, network analysis and design, project planning and scheduling, decision analysis, queuing systems, simulation methods, and economic evaluation. The examples and problems presented in the course illustrate how the analysis methods are used in a variety of systems applications, such as: civil engineering, environmental systems, transportation systems, construction management, water resources, urban development, etc.
Transportation Safety (Formerly 14.585)Transportation Safety goes beyond the accepted standards for highway design. Providing a safe and efficient transportation system for all users is the primary objective of federal, state, and local transportation agencies throughout the nation. This class addresses fundamentals of highway design and operation, human factors, accident investigation, vehicle characteristics and highway safety analysis.
Hazardous Waste Site Remediation (Formerly 14.595)This course focuses on the principles of hazardous waste site remediation (with an emphasis on organic contaminants) using physical, chemical or biological remediation technologies. Both established and emerging remediation technologies including: bioremediation, intrinsic remediation, soil vapor extraction (SVE), in situ air sparging (IAS), vacuum-enhanced recovery (VER), application of surfactants for enhanced in situ soil washing, hydraulic and pneumatic fracturing, electrokinetics, in situ reactive walls, phytoremediation, and in situ oxidation, will be addressed. A term paper and professional presentation in class regarding a relevant topic is required.
Grad Industrial Exposure (Formerly 14.596)There is currently no description available for this course.
Special Topics in Civil Engineering (Formerly 14.651)Course content and credits to be arranged with instructor who agrees to direct the student.
Civil Engineering Individual Project (Formerly 14.693)There is currently no description available for this course.
Supervised Teaching in Civil Engineering (Formerly 14.705)There is currently no description available for this course.
Masters Project in Civil Engineering (Formerly 14.733)There is currently no description available for this course.
Masters Project in Civil Engineering (Formerly 14.736)There is currently no description available for this course.
Master's Thesis-Civil Engineering (Formerly 14.741)There is currently no description available for this course.
Master's Thesis - Civil Engineering (Formerly 14.743)There is currently no description available for this course.
Master's Thesis - Civil Engineering (Formerly 14.746)There is currently no description available for this course.
Master's Thesis - Civil Engineering (Formerly 14.749)There is currently no description available for this course.
Doctoral Dissertation (Formerly 14.751)There is currently no description available for this course.
Independent Study in Civil Engineering (Formerly 14.752)There is currently no description available for this course.
Doctoral Dissertation (Formerly 14.753)There is currently no description available for this course.
Doctoral Dissertation/Civil Engineering (Formerly 14.756)There is currently no description available for this course.
Doctoral Dissertation (Formerly 14.757)There is currently no description available for this course.
Doctoral Dissertation (Formerly 14.759)There is currently no description available for this course.
Continued Graduate ResearchThere is currently no description available for this course.
Continued Graduate Research (Formerly 14.763)There is currently no description available for this course.
Continued Graduate Research (Formerly 14.766)There is currently no description available for this course.
Continued Graduate Research (Formerly 14.769)There is currently no description available for this course.
Curricular Practical Training for Engineering Doctoral CandidatesCurricular Practical Training (CPT) is a training program for doctoral students in Engineering. Participation in CPT acknowledges that this an integral part of an established curriculum and directly related to the major area of study or thesis.
General Chemistry for Engineers
This rigorous course is primarily for, but not limited to, engineering students. Topics include an introduction to some basic concepts in chemistry, stoichiometry, the First Law of Thermodynamics, thermochemistry, electronic theory of composition and structure, and chemical bonding. The lecture is supported by workshop-style problem sessions. Offered in traditional and online format. Lecture 3 (Fall, Spring).
Freshman Practicum
EE Practicum provides an introduction to the practice of electrical engineering including understanding laboratory practice, identifying electronic components, operating electronic test and measurement instruments, prototyping electronic circuits, and generating and analyzing waveforms. Laboratory exercises introduce the student to new devices or technologies and an associated application or measurement technique. This hands-on lab course emphasizes experiential learning to introduce the student to electrical engineering design practices and tools used throughout the undergraduate electrical engineering program and their professional career. Laboratory exercises are conducted individually by students using their own breadboard and components in a test and measurement laboratory setting. Measurements and observations from the laboratory exercises are recorded and presented by the student to a lab instructor or teaching assistant. Documented results are uploaded for assessment. Lab 1 (Fall, Spring).
Digital Systems I
This course introduces the student to the basic components and methodologies used in digital systems design. It is usually the student's first exposure to engineering design. The laboratory component consists of small design, implementation, and debugging projects. The complexity of these projects increases steadily throughout the term, starting with circuits of a few gates and ending with small systems containing several tens of gates and memory elements. Topics include: Boolean algebra, synthesis and analysis of combinational logic circuits, arithmetic circuits, memory elements, synthesis and analysis of sequential logic circuits, finite state machines, and data transfers. (This course is restricted to MCEE-BS, EEEE-BS and ENGRX-UND students.) Lab 2 (Fall, Spring).
General Education – Mathematical Perspective A: Project-Based Calculus I
This is the first in a two-course sequence intended for students majoring in mathematics, science, or engineering. It emphasizes the understanding of concepts, and using them to solve physical problems. The course covers functions, limits, continuity, the derivative, rules of differentiation, applications of the derivative, Riemann sums, definite integrals, and indefinite integrals. (Prerequisite: A- or better in MATH-111 or A- or better in ((NMTH-260 or NMTH-272 or NMTH-275) and NMTH-220) or a math placement exam score greater than or equal to 70 or department permission to enroll in this class.) Lecture 6 (Fall, Spring, Summer).
General Education – Mathematical Perspective B: Project-Based Calculus II
This is the second in a two-course sequence intended for students majoring in mathematics, science, or engineering. It emphasizes the understanding of concepts, and using them to solve physical problems. The course covers techniques of integration including integration by parts, partial fractions, improper integrals, applications of integration, representing functions by infinite series, convergence and divergence of series, parametric curves, and polar coordinates. (Prerequisites: C- or better in (MATH-181 or MATH-173 or 1016-282) or (MATH-171 and MATH-180) or equivalent course(s).) Lecture 6 (Fall, Spring, Summer).
General Education – Scientific Principles Perspective: University Physics I
This is a course in calculus-based physics for science and engineering majors. Topics include kinematics, planar motion, Newton's Laws, gravitation, work and energy, momentum and impulse, conservation laws, systems of particles, rotational motion, static equilibrium, mechanical oscillations and waves, and data presentation/analysis. The course is taught in a workshop format that integrates the material traditionally found in separate lecture and laboratory courses. (Prerequisites: C- or better in MATH-181 or equivalent course. Co-requisites: MATH-182 or equivalent course.) Lec/Lab 6 (Fall, Spring).
RIT 365: RIT Connections
RIT 365 students participate in experiential learning opportunities designed to launch them into their career at RIT, support them in making multiple and varied connections across the university, and immerse them in processes of competency development. Students will plan for and reflect on their first-year experiences, receive feedback, and develop a personal plan for future action in order to develop foundational self-awareness and recognize broad-based professional competencies. Lecture 1 (Fall, Spring).
General Education – Elective
General Education – First Year Writing (WI)
General Education – Artistic Perspective
General Education – Global Perspective
General Education – Social Perspective
Computational Problem Solving for Engineers
This course introduces computational problem solving. Basic problem-solving techniques and algorithm development through the process of top-down stepwise refinement and functional decomposition are introduced throughout the course. Classical numerical problems encountered in science and engineering are used to demonstrate the development of algorithms and their implementations. May not be taken for credit by Computer Science, Software Engineering, or Computer Engineering majors. This course is designed for Electrical Engineering and Micro-Electronic Engineering majors and students interested in the Electrical Engineering minor. (Prerequisites: (MATH-181 or MATH-181A or MATH-171) and (MCEE-BS or EEEE-BS or ENGRX-UND or EEEEDU-BS or ENGXDU-UND) or equivalent courses.) Lecture 3 (Fall, Spring).
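As a hedged illustration of the top-down, functional-decomposition approach and the classical numerical problems this description mentions, here is a minimal root-finding sketch using the bisection method; the example function and tolerances are my own and not course material.

```python
# A classical engineering-style numerical problem solved by functional
# decomposition: find where a function crosses zero using bisection.
# The example function and tolerances are illustrative only.

def f(x):
    """Example function whose root we want: f(x) = x**3 - x - 2."""
    return x**3 - x - 2

def has_sign_change(a, b):
    """Check that a root is bracketed between a and b."""
    return f(a) * f(b) < 0

def bisect(a, b, tol=1e-8, max_iter=100):
    """Repeatedly halve the bracket [a, b] until it is smaller than tol."""
    if not has_sign_change(a, b):
        raise ValueError("root is not bracketed")
    for _ in range(max_iter):
        mid = 0.5 * (a + b)
        if b - a < tol:
            return mid
        if has_sign_change(a, mid):
            b = mid
        else:
            a = mid
    return 0.5 * (a + b)

print(bisect(1.0, 2.0))   # ~1.5214, the real root of x**3 - x - 2
```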
Engineering Co-op Preparation
This course will prepare students, who are entering their second year of study, for both the job search and employment in the field of engineering. Students will learn strategies for conducting a successful job search, including the preparation of resumes and cover letters; behavioral interviewing techniques and effective use of social media in the application process. Professional and ethical responsibilities during the job search and for co-op and subsequent professional experiences will be discussed. (This course is restricted to students in Kate Gleason College of Engineering with at least 2nd year standing.) Lecture 1 (Fall, Spring).
Digital Systems II
In the first part, the course covers the design of digital systems using a hardware description language. In the second part, it covers the design of large digital systems using the computer design methodology, and culminates with the design of a reduced instruction set central processing unit, associated memory and input/output peripherals. The course focuses on the design, capture, simulation, and verification of major hardware components such as: the datapath, the control unit, the central processing unit, the system memory, and the I/O modules. The lab sessions reinforce and complement the concepts and design principles presented in the lecture through the use of CAD tools and emulation on a commercial FPGA. This course assumes a background in C programming. (Prerequisites: (EEEE-120 or 0306-341) and CMPR-271 or equivalent courses.) Lab 2 (Fall, Spring).
Introduction to Semiconductor Devices
An introductory course on the fundamentals of semiconductor physics and principles of operation of basic devices. Topics include semiconductor fundamentals (crystal structure, statistical physics of carrier concentration, motion in crystals, energy band models, drift and diffusion currents) as well as the operation of pn junction diodes, bipolar junction transistors (BJT), metal-oxide-semiconductor (MOS) capacitors and MOS field-effect transistors. (Prerequisites: PHYS-212 or PHYS-208 and 209 or equivalent course.) Lecture 3 (Fall, Spring).
Circuits I
Covers basics of DC circuit analysis starting with the definition of voltage, current, resistance, power and energy. Linearity and superposition, together with Kirchhoff's laws, are applied to analysis of circuits having series, parallel and other combinations of circuit elements. Thevenin, Norton and maximum power transfer theorems are proved and applied. Circuits with ideal op-amps are introduced. Inductance and capacitance are introduced and the transient response of RL, RC and RLC circuits to step inputs is established. Practical aspects of the properties of passive devices and batteries are discussed, as are the characteristics of battery-powered circuitry. The laboratory component incorporates use of both computer and manually controlled instrumentation including power supplies, signal generators and oscilloscopes to reinforce concepts discussed in class as well as circuit design and simulation software. (Prerequisite: MATH-173 or MATH-182 or MATH-182A or equivalent course.) Lab 3 (Fall, Spring, Summer).
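As a hedged illustration of the DC analysis techniques listed here, the sketch below applies Kirchhoff's current law at the nodes of a small, invented resistor network and solves the resulting linear system; the component values and topology are hypothetical.

```python
# Nodal analysis sketch for a small DC resistor network (values are invented):
# a 1 A current source drives node 1; R1 = 100 ohm from node 1 to ground,
# R2 = 200 ohm between nodes 1 and 2, R3 = 300 ohm from node 2 to ground.
import numpy as np

R1, R2, R3 = 100.0, 200.0, 300.0
I_src = 1.0

# Kirchhoff's current law at each node gives G @ v = i.
G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2 + 1/R3]])
i = np.array([I_src, 0.0])

v = np.linalg.solve(G, i)
print(f"V1 = {v[0]:.2f} V, V2 = {v[1]:.2f} V")   # ~83.33 V and ~50.00 V
```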
Circuits II
This course covers the fundamentals of AC circuit analysis starting with the study of sinusoidal steady-state solutions for circuits in the time domain. The complex plane is introduced along with the concepts of complex exponential functions, phasors, impedances and admittances. Nodal, loop and mesh methods of analysis as well as Thevenin and related theorems are applied to the complex plane. The concept of complex power is developed. The analysis of mutual induction as applied to coupled-coils. Linear, ideal and non-ideal transformers are introduced. Complex frequency analysis is introduced to enable discussion of transfer functions, frequency dependent behavior, Bode plots, resonance phenomenon and simple filter circuits. Two-port network theory is developed and applied to circuits and interconnections. (Prerequisites: C or better in EEEE-281 or equivalent course.) Lecture 3 (Fall, Spring, Summer).
Advanced Programming
This course teaches students to master C++ programming in solving engineering problems and introduces students to basic concepts of object-oriented programming. Advanced skills in applying pointers are emphasized throughout the course to improve the portability and efficiency of programs. Advanced skills with preprocessors, generic functions, linked lists, and the Standard Template Library will be developed. (Prerequisites: CMPR-271 or equivalent course.) Lecture 3 (Fall, Spring).
Multivariable and Vector Calculus
This course is principally a study of the calculus of functions of two or more variables, but also includes a study of vectors, vector-valued functions and their derivatives. The course covers limits, partial derivatives, multiple integrals, Stokes' Theorem, Green's Theorem, the Divergence Theorem, and applications in physics. Credit cannot be granted for both this course and MATH-219. (Prerequisite: C- or better in MATH-173 or MATH-182 or MATH-182A or equivalent course.) Lecture 4 (Fall, Spring, Summer).
General Education – Elective: Differential Equations
This course is an introduction to the study of ordinary differential equations and their applications. Topics include solutions to first order equations and linear second order equations, method of undetermined coefficients, variation of parameters, linear independence and the Wronskian, vibrating systems, and Laplace transforms. (Prerequisite: MATH-173 or MATH-182 or MATH-182A or equivalent course.) Lecture 3 (Fall, Spring, Summer).
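As a small worked illustration of the solution techniques listed above, the following Python (SymPy) sketch solves the constant-coefficient equation y'' + y = 0 with initial conditions y(0) = 1 and y'(0) = 0; the example equation is assumed for illustration only.

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Second-order linear ODE with constant coefficients: y'' + y = 0.
ode = sp.Eq(y(t).diff(t, 2) + y(t), 0)
sol = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 0})
print(sol)   # Eq(y(t), cos(t))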
General Education – Natural Science Inquiry Perspective: University Physics II
This course is a continuation of PHYS-211, University Physics I. Topics include electrostatics, Gauss' law, electric field and potential, capacitance, resistance, DC circuits, magnetic field, Ampere's law, inductance, and geometrical and physical optics. The course is taught in a lecture/workshop format that integrates the material traditionally found in separate lecture and laboratory courses. (Prerequisites: (PHYS-211 or PHYS-211A or PHYS-206 or PHYS-216) or (MECE-102, MECE-103 and MECE-205) and (MATH-182 or MATH-172 or MATH-182A) or equivalent courses. Grades of C- or better are required in all prerequisite courses.) Lec/Lab 6 (Fall, Spring).
General Education – Ethical Perspective
Linear Systems
Linear Systems provides the foundations of continuous and discrete signal and system analysis and modeling. Topics include a description of continuous linear systems via differential equations; a description of discrete systems via difference equations; input-output relationships of continuous and discrete linear systems; the continuous-time convolution integral; the discrete-time convolution sum; application of convolution principles to system response calculations; exponential and trigonometric forms of Fourier series and their properties; Fourier transforms, including energy spectrum and energy spectral density; sampling of continuous-time signals and the sampling theorem; and the Laplace, z, and discrete-time Fourier transforms (DTFT). The solution of differential equations and circuit analysis problems using Laplace transforms, transfer functions of physical systems, block diagram algebra, and transfer function realization is also covered, as is a comprehensive study of the z transform and its inverse, which includes system transfer function concepts, system frequency response and its interpretation, and the relationship of the z transform to the Fourier and Laplace transforms. Finally, an introduction to the design of digital filters, including filter block diagrams for Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) filters, is presented. (Prerequisites: EEEE-282 and MATH-231 and CMPR-271 or equivalent course.) Lecture 4 (Fall, Spring).
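To illustrate the discrete-time convolution sum and FIR filtering mentioned above, here is a minimal Python sketch; the moving-average filter and input sequence are assumed example data.

import numpy as np

# Discrete-time convolution sum: y[n] = sum_k h[k] * x[n - k].
h = np.ones(4) / 4.0                 # 4-tap moving-average FIR filter
x = np.array([0., 1., 2., 3., 4., 0., 0., 0.])

y = np.convolve(x, h)                # full convolution, length len(x)+len(h)-1
print(np.round(y, 3))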
EM Fields and Transmission Lines
The course provides the foundations of EM fields, static and time varying, and a study of propagation, reflection, and transmission of electromagnetic waves in unbounded regions and in transmission lines. Topics include the following: electric field intensity and potential, Gauss' law, polarization, electric flux density, dielectric constant and boundary conditions, Poisson's and Laplace's equations, method of images, steady electric current and conduction current density, vector magnetic potential, Biot-Savart law, magnetization, magnetic field intensity, permeability, boundary conditions, Faraday's law, Maxwell's equations and the continuity equation. Time-harmonic EM fields, wave equations, uniform plane waves, polarization, Poynting theorem and power, reflection and transmission from multiple dielectric interfaces, transmission line equations, transients on transmission lines, pulse and step excitations, reflection diagrams, sinusoidal steady-state solutions, standing waves, the Smith Chart and impedance matching techniques, and TE and TM waves in rectangular waveguides are also covered. Laboratory experiments use state-of-the-art RF equipment to illustrate fundamental wave propagation and reflection concepts, and design projects use state-of-the-art EM modeling tools. (Prerequisites: MATH-221 and MATH-231 and PHYS-212 or PHYS-208 and PHYS-209 or equivalent course.) Lab 3 (Fall, Spring).
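As a small numerical illustration of the impedance-matching concepts above, the following Python sketch computes the voltage reflection coefficient and VSWR for an assumed mismatched load; Z0 and ZL are example values, not course data.

# Transmission-line reflection coefficient and VSWR for a mismatched load.
Z0 = 50.0            # ohm, characteristic impedance (assumed)
ZL = 75.0 + 25.0j    # ohm, complex load impedance (assumed)

gamma = (ZL - Z0) / (ZL + Z0)        # voltage reflection coefficient
vswr = (1 + abs(gamma)) / (1 - abs(gamma))

print(f"|Gamma| = {abs(gamma):.3f}, VSWR = {vswr:.2f}")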
Digital Electronics
This is an introductory course in digital MOS circuit analysis and design. The course covers the following topics: (1) MOSFET I-V behavior in aggressively scaled devices; (2) Static and dynamic characteristics of NMOS and CMOS inverters; (3) Combinational and sequential logic networks using CMOS technology; (4) Dynamic CMOS logic networks, including precharge-evaluate, domino and transmission gate circuits; (5) Special topics, including static and dynamic MOS memory, and interconnect RLC behavior. (Prerequisites: EEEE-281 or equivalent course.) Lab 3 (Fall, Spring, Summer).
Co-op (fall and summer)
One semester of paid work experience in electrical engineering. (This course is restricted to EEEE-BS Major students.) CO OP (Fall, Spring, Summer).
Complex Variables
This course covers the algebra of complex numbers, analytic functions, Cauchy-Riemann equations, complex integration, Cauchy's integral theorem and integral formulas, Taylor and Laurent series, residues, and the calculation of real-valued integrals by complex-variable methods. (Prerequisites: MATH-219 or MATH-221 or equivalent course.) Lecture 3 (Fall, Spring).
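For a quick numerical illustration of the residue ideas covered here, the following Python sketch approximates the contour integral of 1/z around the unit circle, which the residue theorem gives as 2*pi*i; the discretization is an assumption of the example.

import numpy as np

# Numerical check of the residue theorem: the contour integral of 1/z
# around the unit circle equals 2*pi*i (residue 1 at z = 0).
theta = np.linspace(0.0, 2 * np.pi, 20_000)
z = np.exp(1j * theta)               # unit circle parameterization
dz = 1j * np.exp(1j * theta)         # dz/dtheta
integral = np.trapz(1.0 / z * dz, theta)
print(integral)                      # approximately 0 + 6.2832j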
General Education – Immersion 1
Classical Control
This course introduces students to the study of linear continuous-time classical control systems, their behavior, design, and use in augmenting engineering system performance. The course is based on classical control methods using Laplace transforms, block diagrams, root locus, and frequency-domain analysis. Topics include: Laplace-transform review; Bode plot review; system modeling for control; relationships of transfer-function poles and zeros to time-response behaviors; stability analysis; steady-state error, error constants, and error specification; feedback control properties; relationships between stability margins and transient behavior; lead, lag, and PID control; root-locus analysis and design; frequency-response design and Nyquist stability. A laboratory will provide students with hands-on analysis and design-build-test experience, and includes the use of computer-aided design software such as MATLAB. (Prerequisites: EEEE-353 or equivalent course.) Lab 3 (Fall, Spring).
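As a brief illustration of the time-response topics above, this minimal Python sketch (using SciPy) computes the step response of a standard second-order plant; the natural frequency and damping ratio are assumed example values.

from scipy import signal

# Step response of G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2).
wn, zeta = 4.0, 0.5                  # assumed natural frequency and damping
G = signal.TransferFunction([wn**2], [1.0, 2*zeta*wn, wn**2])

t, y = signal.step(G)
print(f"final value ~ {y[-1]:.3f}, peak ~ {y.max():.3f}")  # ~16% overshoot for zeta = 0.5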
Embedded Systems Design
The purpose of this course is to expose students to both the hardware and the software components of a digital embedded system. It focuses on the boundary between hardware and software operations. The elements of microcomputer architecture are presented, including a detailed discussion of the memory, input-output, the central processing unit (CPU) and the busses over which they communicate. C and assembly language level programming concepts are introduced, with an emphasis on the manipulation of microcomputer system elements through software means. Efficient methods for designing and developing C and assembly language programs are presented. Concepts of program-controlled input and output are studied in detail and reinforced with extensive hands-on lab exercises involving both software and hardware. (Prerequisites: EEEE-220 or equivalent course.) Lab 3 (Fall, Spring).
Analog Electronics
This is an introductory course in analog electronic circuit analysis and design. The course covers the following topics: (1) Diode circuit DC and small-signal behavior, including rectifying as well as Zener-diode-based voltage regulation; (2) MOSFET current-voltage characteristics; (3) DC biasing of MOSFET circuits, including integrated-circuit current sources; (4) Small-signal analysis of single-transistor MOSFET amplifiers and differential amplifiers; (5) Multi-stage MOSFET amplifiers, such as cascade amplifiers, and operational amplifiers; (6) Frequency response of MOSFET-based single- and multi-stage amplifiers; (7) DC and small-signal analysis and design of bipolar junction transistor (BJT) devices and circuits; (8) Feedback and stability in MOSFET and BJT amplifiers. (Prerequisites: EEEE-281 and EEEE-282 and EEEE-499 or equivalent courses.) Lab 3 (Fall, Spring).
Communication Systems (WI-PR)
Introduction to Communication Systems provides the basics of the formation, transmission and reception of information over communication channels. Spectral density and correlation descriptions for deterministic and stationary random signals. Amplitude and angle modulation methods (e.g. AM and FM) for continuous signals. Carrier detection and synchronization. Phase-locked loop and its application. Introduction to digital communication. Binary ASK, FSK and PSK. Noise effects. Optimum detection: matched filters, maximum-likelihood reception. Computer simulation. (Prerequisites: EEEE-353 and (MATH-251 or 1016-345) or equivalent course.) Lab 3 (Fall, Spring).
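A minimal Python sketch of conventional amplitude modulation, one of the analog modulation methods named above; the carrier and message frequencies, modulation index, and sample rate are assumed example values.

import numpy as np

# Conventional AM: s(t) = Ac * (1 + m*cos(2*pi*fm*t)) * cos(2*pi*fc*t).
fs = 100_000.0                        # sample rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
fc, fm, m, Ac = 10_000.0, 500.0, 0.7, 1.0

s = Ac * (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)
print(f"peak envelope = {Ac*(1+m):.2f}, min envelope = {Ac*(1-m):.2f}")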
Senior Design Project I
This is the first in a two-course sequence oriented to the solution of real-world engineering design problems. This is a capstone learning experience that integrates engineering theory, principles, and processes within a collaborative environment. Multidisciplinary student teams follow a systems engineering design process, which includes assessing customer needs, developing engineering specifications, generating and evaluating concepts, choosing an approach, developing the details of the design, and implementing the design to the extent feasible, for example by building and testing a prototype or implementing a chosen set of improvements to a process. This first course focuses primarily on defining the problem and developing the design, but may include elements of build/implementation. The second course may include elements of design, but focuses on build/implementation and communicating information about the final design. (Prerequisites: EEEE-374 and EEEE-414 and EEEE-420 and EEEE-480 and two co-ops (EEEE-499).) Lecture 3 (Fall, Spring).
Co-op (summer)
One semester of paid work experience in electrical engineering. (This course is restricted to EEEE-BS Major students.) CO OP (Fall, Spring, Summer).
Random Signals and Noise
In this course the student is introduced to random variables and stochastic processes. Topics covered include probability theory, conditional probability and Bayes' theorem, discrete and continuous random variables, distribution and density functions, moments and characteristic functions, functions of one and several random variables, Gaussian random variables and the central limit theorem, estimation theory, random processes, stationarity and ergodicity, autocorrelation, cross-correlation and power spectral density, response of linear systems to random inputs, linear prediction, Wiener filtering, elements of detection, and matched filters. (Prerequisites: This course is restricted to graduate students in the EEEE-MS, EEEE-BS/MS program.) Lecture 3 (Fall, Spring).
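To illustrate the autocorrelation concept listed above, the following Python sketch estimates the autocorrelation of zero-mean white Gaussian noise, which should be approximately sigma^2 at lag 0 and near zero elsewhere; the record length and sigma are assumed.

import numpy as np

# Sample autocorrelation of zero-mean white Gaussian noise.
rng = np.random.default_rng(0)
sigma, N = 2.0, 50_000               # assumed noise std dev and record length
x = rng.normal(0.0, sigma, N)

for k in range(0, 4):
    rk = np.mean(x[:N - k] * x[k:])  # biased estimate of R_x[k]
    print(f"R[{k}] ~ {rk:.3f}")      # R[0] ~ 4, others ~ 0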
Engineering Analysis
The course trains students to utilize mathematical techniques from an engineering perspective, and provides essential background for success in graduate level studies. The course begins with a pertinent review of matrices, transformations, partitions, determinants and various techniques to solve linear equations. It then transitions to linear vector spaces, basis definitions, normed and inner vector spaces, orthogonality, eigenvalues/eigenvectors, diagonalization, state space solutions and optimization. Applications of linear algebra to engineering problems are examined throughout the course. Topics include: matrix algebra and elementary matrix operations, special matrices, determinants, matrix inversion, null and column spaces, linear vector spaces and subspaces, span, basis/change of basis, normed and inner vector spaces, projections, Gram-Schmidt/QR factorizations, eigenvalues and eigenvectors, matrix diagonalization, Jordan canonical forms, singular value decomposition, functions of matrices, matrix polynomials and Cayley-Hamilton theorem, state-space modeling, optimization techniques, least squares technique, total least squares, and numerical techniques. Electrical engineering applications will be discussed throughout the course. (Prerequisites: This course is restricted to graduate students in the EEEE-MS, EEEE-BS/MS program.) Lecture 3 (Fall, Spring).
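As a short illustration of two of the linear-algebra tools above, the following Python sketch computes an eigen-decomposition and a least-squares line fit with NumPy; the matrix and data points are assumed example values.

import numpy as np

# Eigenvalues/eigenvectors of a small symmetric matrix (illustrative).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
vals, vecs = np.linalg.eig(A)
print("eigenvalues:", np.round(vals, 4))

# Least squares: fit y = c0 + c1*x to a few noisy points (assumed data).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])
M = np.column_stack([np.ones_like(x), x])
coeffs, *_ = np.linalg.lstsq(M, y, rcond=None)
print("fit coefficients:", np.round(coeffs, 3))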
Graduate Seminar
The objective of this course is to introduce full-time Electrical Engineering BS/MS and incoming graduate students to the graduate programs and to campus resources that support research. Presentations from faculty, upper-division MS/PhD students, staff, and off-campus speakers will provide a basis for student selection of research topics, comprehensive literature review, and modeling effective conduct and presentation of research. All first-year graduate students enrolled full time are required to successfully complete two semesters of this seminar. Seminar 3 (Fall, Spring).
Probability and Statistics
This course introduces sample spaces and events, axioms of probability, counting techniques, conditional probability and independence, distributions of discrete and continuous random variables, joint distributions (discrete and continuous), the central limit theorem, descriptive statistics, interval estimation, and applications of probability and statistics to real-world problems. A statistical package such as Minitab or R is used for data analysis and statistical applications. (Prerequisites: MATH-173 or MATH-182 or MATH 182A or equivalent course.) Lecture 3 (Fall, Spring, Summer).
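For a small illustration of the central limit theorem mentioned above, this Python sketch simulates means of uniform samples and compares them with the theoretical mean and standard deviation; the sample size and number of trials are assumed.

import numpy as np

# Central limit theorem: means of n uniform(0,1) samples are approximately
# normal with mean 0.5 and variance 1/(12n).
rng = np.random.default_rng(1)
n, trials = 30, 20_000               # assumed sample size and number of trials
means = rng.uniform(0.0, 1.0, size=(trials, n)).mean(axis=1)

print(f"sample mean of means = {means.mean():.4f}  (theory: 0.5)")
print(f"sample std of means  = {means.std():.4f}  (theory: {np.sqrt(1/(12*n)):.4f})")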
General Education – Immersion 2
Open Elective
Senior Design Project II
This is the second in a two-course sequence oriented to the solution of real-world engineering design problems. This is a capstone learning experience that integrates engineering theory, principles, and processes within a collaborative environment. Multidisciplinary student teams follow a systems engineering design process, which includes assessing customer needs, developing engineering specifications, generating and evaluating concepts, choosing an approach, developing the details of the design, and implementing the design to the extent feasible, for example by building and testing a prototype or implementing a chosen set of improvements to a process. The first course focuses primarily on defining the problem and developing the design, but may include elements of build/implementation. This second course may include elements of design, but focuses on build/implementation and communicating information about the final design. (Prerequisites: EEEE-497 or equivalent course.) Lecture 3 (Fall, Spring).
Advanced Engineering Mathematics
The course begins with a pertinent review of linear and nonlinear ordinary differential equations and Laplace transforms and their applications to solving engineering problems. It then continues with an in-depth study of vector calculus, complex analysis/integration, and partial differential equations, and their applications in analyzing and solving a variety of engineering problems, especially in the areas of control, circuit analysis, communication, and signal/image processing. Topics include: ordinary and partial differential equations, Laplace transforms, vector calculus, complex functions/analysis, complex integration, and numerical techniques. Electrical engineering applications will be discussed throughout the course. (This class is restricted to degree-seeking graduate students or those with permission from instructor.) Lecture 3 (Fall, Spring, Summer).
Thesis
An independent engineering project or research problem to demonstrate professional maturity. A formal written thesis and an oral defense are required. The student must obtain the approval of an appropriate faculty member to guide the thesis before registering for the thesis. A thesis may be used to earn a maximum of 6 credits. Thesis (Fall, Spring, Summer).
Graduate Paper plus 1 Graduate Elective
This course is used to fulfill the graduate paper requirement under the non-thesis option for the MS degree in electrical engineering. The student must obtain the approval of an appropriate faculty member to supervise the paper before registering for this course. Project (Fall, Spring, Summer).
Graduate Seminar
The objective of this course is to introduce full-time Electrical Engineering BS/MS and incoming graduate students to the graduate programs and to campus resources that support research. Presentations from faculty, upper-division MS/PhD students, staff, and off-campus speakers will provide a basis for student selection of research topics, comprehensive literature review, and modeling effective conduct and presentation of research. All first-year graduate students enrolled full time are required to successfully complete two semesters of this seminar. Seminar 3 (Fall, Spring).
Open Elective
Professional Electives
Graduate Electives
General Education – Immersion 3
VANCOUVER, British Columbia, June 23, 2022 (GLOBE NEWSWIRE) -- Juggernaut Exploration Ltd (JUGR.V) (OTCQB: JUGRF) (FSE: 4JE) (the “Company” or “Juggernaut”) is pleased to report that exploration has commenced in preparation for drilling on the 100% controlled Bingo property. The Bingo property was generated by the J2 Syndicate within the Eskay Rift region of the Golden Triangle. The property is situated within two kilometres of the unconformity between Lower Hazelton and Stuhini Group rocks, also known as the “Red Line” boundary, where the vast majority of large deposits in the Golden Triangle have been found. The property has an area of 989 hectares and is located 45 km SSW of Stewart, BC, and 28 km W of Kitsault, and only 12 km to tidewater landing and roads in the historic mining town of Anyox, providing for cost-effective exploration.
Bingo 2016 to 2018 Historical Highlights include:
The Bingo Main target is an original discovery that has never been drill tested. The zone contains significant historical gold mineralized grab, chip and channel samples over an area of 420 metres x 320 metres. The zone is open both on surface and to depth and is drill ready. Highlights include:
83% of all the samples taken contained gold mineralization
A historical channel cut over 4.85 metres assayed 1.77 gpt Au and 0.20% Cu; drill ready
A historical channel cut over 3.2 metres assayed 1.48 gpt Au and 0.37% Cu; drill ready
Between 2016 and 2018, 19 chip samples were collected, assaying up to 9.79 gpt Au, and 18 grab samples returned assay values up to 1.22 gpt Au
The 2022 program includes ground and drone mapping, sampling, ground geophysics, and BLEG geochemistry focused on defining additional drill targets in preparation for drilling.
Exploration Update:
Goldstar: 2022 drilling is planned to commence in early July, expanding upon the 5 discovery holes drilled in 2021, all of which intersected significant widths of high-grade gold/polymetallic mineralization in quartz-chlorite-sulphide veins on the newly discovered Goldilocks Zone. Drill hole GS-21-05 intersected 10.795 gpt Au (14.31 AuEq) over 5.5 m, including 29.2 gpt Au (38.37 AuEq) over 2.0 m. The Goldilocks Zone has been traced on surface for 290 meters with 160 meters of vertical relief before being covered by overburden and remains open both along strike and to depth. The strong gold mineralization confirmed in all the drill holes strongly suggests the presence of a significant gold system that remains under-explored. The 2022 drilling will focus on testing the Goldilocks Zone along strike for up to ~300 meters and down dip for up to 400 m.
Goldstandard: 2022 drilling is planned to commence in early July, following up on drilling in 2021. Drill hole GSD-21-10 intersected 2.146 gpt Au (2.302 gpt AuEq) over 6.5 m, including 3.284 gpt Au (3.498 gpt AuEq) over 4.0 m and 8.210 gpt Au (8.638 gpt AuEq) over 1 m. The Goldzilla Zone has been traced on surface for 800 m with a vertical relief of 300 m and remains open both to the southeast and to depth; only a small fraction (~50 meters along strike) has been drill tested. Drilling is planned to test both down dip and along strike of three of the seven extensive high-grade polymetallic gold-silver veins discovered on surface on the property: Goldzilla, Kraken, and Phoenix. Only a small portion of the Goldzilla vein has been tested to date; that drilling confirmed gold mineralization both along strike and to depth, and the vein remains open.
Midas: Surface exploration is planned to commence in August, focused on sampling and mapping the discovery outcrop both along and across strike in preparation for drilling on the recently discovered Kokomo Eskay-style Volcanic Hosted Massive Sulphide (VHMS) target. The Kokomo outcrop contains high-grade gold-silver polymetallic mineralization in semi-massive to massive sulphides, where a 1 m chip sample assayed 9.343 gpt Au, 117 gpt Ag, 1.58% Cu and 1.77% Zn. This newly discovered outcrop is located 700 m in the headwaters of a drainage where a Bulk Leach Extractable Gold (BLEG) stream sediment sample assayed 29 ppb Au, 613 ppb Ag, 137 ppm Cu, 54.4 ppm Pb and 462 ppm Zn. The high-grade polymetallic gold-silver mineralization remains open in all directions, and outcrops of the same or similar lithology extend over several hundred meters.
Statements
Dan Stuart, President and CEO of Juggernaut Exploration, states: “We are excited to have begun our most expansive and aggressive exploration season at Juggernaut to date. We are drilling and exploring four 100% controlled original discoveries this summer, all of which have the potential to become the next big gold discovery in BC. We look forward to providing news around drilling, exploration and results as the exploration program progresses this summer. Juggernaut is in a very unique position, with a tight share structure of just over 43MM shares issued and outstanding and $4,000,000 currently in the treasury including exploration rebates. Juggernaut is fully funded and on track for the rapidly approaching exploration programs for both 2022 and 2023.”
Qualified Person
Rein Turna P. Geo is the qualified person as defined by National Instrument 43-101, for Juggernaut Exploration projects, and supervised the preparation of, and has reviewed and approved, the technical information in this release.
Other
The reader is cautioned that grab samples are spot samples which are typically, but not exclusively, constrained to mineralization. Grab samples are selective in nature and collected to determine the presence or absence of mineralization and are not intended to be representative of the material sampled. In addition, the reader is cautioned that proximity to known mineralization does not guarantee similar mineralization will exist on the properties.
For more information, please contact:
Juggernaut Exploration Ltd.
Dan Stuart
President and Chief Executive Officer
Tel: (604)-559-8028
www.juggernautexploration.com
NEITHER THE TSX VENTURE EXCHANGE NOR ITS REGULATION SERVICES PROVIDER (AS THAT TERM IS DEFINED IN THE POLICIES OF THE TSX VENTURE EXCHANGE) ACCEPTS RESPONSIBILITY FOR THE ADEQUACY OR ACCURACY OF THIS RELEASE.
FORWARD LOOKING STATEMENT
Certain disclosure in this release may constitute forward-looking statements that are subject to numerous risks and uncertainties relating to Juggernaut’s operations that may cause future results to differ materially from those expressed or implied by those forward-looking statements, including its ability to complete the contemplated private placement. Readers are cautioned not to place undue reliance on these statements.
NOT FOR DISSEMINATION IN THE UNITED STATES OR TO U.S. PERSONS OR FOR DISTRIBUTION TO U.S. NEWSWIRE SERVICES. THIS PRESS RELEASE DOES NOT CONSTITUTE AN OFFER TO SELL OR AN INVITATION TO PURCHASE ANY SECURITIES DESCRIBED IN IT.