Schools find that lifecycle management software lets them do more with less.
The IT staff at the Jurupa Unified School District in Riverside, Calif., knew they had to find a better way to manage the district’s 4,500 computers.
With only five technicians, it was simply impossible to make personal visits to the district’s 16 elementary schools, three middle schools and three high schools while keeping up with all the patches and software updates.
“We’re spread out over 44 square miles, and with the state’s budget situation the way it is, hiring additional IT workers was not possible,” says Thomas Tan, Jurupa’s director of information and education technology. “We needed a smarter way to manage the network,” he explains.
Jurupa’s answer was the Altiris Client Management Suite from Symantec, lifecycle management software that helps the district remotely manage PC and network assets, software distribution, and configuration and patch management.
Unauthorized changes account for roughly 60 percent of system downtime.
Source: Enterprise Management Associates
In the past, Tan says, it would take the IT staff 30 minutes to two hours to deploy a new PC image, which is a new version of an operating system with the appropriate configurations, drivers and applications. Now the staff can deploy new images in minutes.
“We could not re-image every computer manually on location,” adds Bob Ford, the district’s network manager. “It takes us 20 minutes to get to the most distant high school, so by managing the images remotely from a central location we save time and fuel costs,” he explains.
Tan says the ROI case for Altiris is very strong: Including two weeks of training and consulting, Altiris cost the district slightly less than $100,000 to deploy.
“A full-time tech would cost us $60,000 a year, so Altiris pays for itself in less than two years,” says Tan.
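Tan’s payback claim is easy to verify. A minimal sketch of the arithmetic, using only the two figures quoted above:

```python
# Back-of-the-envelope payback calculation using the figures quoted above.
deployment_cost = 100_000  # Altiris rollout, including two weeks of training and consulting
annual_savings = 60_000    # salary of the full-time tech the district avoided hiring

payback_years = deployment_cost / annual_savings
print(f"Payback period: {payback_years:.2f} years")  # ~1.67 years, i.e. under two
```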
Many school districts and organizations find that lifecycle management tools that let IT managers take a full view of the network are preferable to point solutions. Products such as Avocent’s LANDesk, Kace Networks’ KBOX, Novell’s ZENworks and Symantec’s Altiris are among the leading players. Prices vary based on the scope of a project, but most of the products cost in the five- to six-figure range for a 1,000-node deployment.
“We’ve gone from a silo-based view to a more holistic view of the network,” says Andi Mann, vice president at Enterprise Management Associates (EMA).
“As we develop new technologies, they tend to be integrated into the lifecycle management tools,” he says. “For example, we’re starting to see virtual applications management offered in many of the latest products.”
School districts struggling with tight budgets say deploying lifecycle management tools is the only way they can survive these challenging economic times. Here are some best practices they offer:
Deploy the tool as soon as possible.
Mike Roberts, technology director at Quinlan Independent School District in Quinlan, Texas, recommends not worrying about mastering all the features right away. Roberts, who deployed a KBOX appliance for his small six-school district, says IT managers are going to learn more about their networks than they ever imagined, so one approach is to just roll it out and use the training sessions to fill in the gaps.
“Very quickly, you’ll find out about license issues you didn’t know about and what’s actually running on each computer on the network,” Roberts says.
EMA’s Mann agrees that IT managers will derive immediate benefits from lifecycle management software. “For some, patch management may be the biggest problem to tackle first, while others may just start right off with inventory and asset control,” he says.
Conduct a thorough inventory of your software and licenses.
For those who are more comfortable with a formal plan, one of the best ways to start is to get control of your software licenses. It’s important to keep in mind that if an end user introduces pirated software onto the network, you could be held liable.
“Set up a baseline and run a variance report,” says EMA’s Mann. “Do it every week until you start seeing patterns as to what’s installed on your network.”
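Mann’s baseline-and-variance routine is straightforward to script against whatever inventory exports a lifecycle management tool produces. A minimal sketch, assuming hypothetical weekly JSON snapshots that map each hostname to its list of installed packages:

```python
import json

def load_inventory(path):
    """Load a {hostname: [installed packages]} snapshot exported by the inventory tool."""
    with open(path) as f:
        return json.load(f)

def variance_report(baseline, current):
    """Print software that appeared or disappeared since the baseline was taken."""
    for host in sorted(set(baseline) | set(current)):
        before = set(baseline.get(host, []))
        after = set(current.get(host, []))
        for pkg in sorted(after - before):
            print(f"{host}: ADDED   {pkg}")
        for pkg in sorted(before - after):
            print(f"{host}: REMOVED {pkg}")

variance_report(load_inventory("baseline.json"), load_inventory("this_week.json"))
```

Run weekly, as Mann suggests, the diff shrinks as the network converges on a known-good baseline.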
Karen Diggs, director of technology at North West Hendricks Schools in Lizton, Ind., says Novell’s ZENworks 10 makes it very easy to get information on all the existing software licenses and the district’s hardware inventory.
“There are so many teachers using different software programs that it’s good to have ZENworks keep track of all the software so we know what’s out there,” Diggs says.
Get your network backbone in order.
Especially for school districts that plan to use the software distribution feature, it’s important to have a robust network with the bandwidth available to handle remote troubleshooting and software installs. Jurupa’s Ford says the district’s Altiris deployment happened roughly in tandem with a gigabit network rollout.
“We now have 600 times more capacity than our existing T1 lines,” Ford says. “In a previous life, we would bring the Altiris servers to the site. Now we can do all the management from a centralized location and have LAN-quality speeds throughout the network,” he explains.
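Ford’s figure lines up with the raw line rates. A quick check, comparing a single gigabit link against a standard 1.544-Mbps T1:

```python
# Compare a gigabit link with a T1 line by raw bit rate.
gigabit_bps = 1_000_000_000  # 1 Gbps
t1_bps = 1_544_000           # standard T1 line rate

print(f"Capacity ratio: {gigabit_bps / t1_bps:.0f}x")  # ~648x, in line with "600 times"
```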
Minimize variations and lock down user desktops.
EMA’s Mann advises IT managers to stay away from multiple images. The fewer the images, the less complex the network is and the fewer procedures the IT staff will have to run and manage, he says.
Along with minimal images, it’s also important to lock down user machines. This means preventing users from installing new programs, using external hard drives and accessing the control panel.
“By preventing user activities, you minimize changes, and there’s less of a chance the network will experience an unauthorized change,” he explains.
Set up a test lab.
Quinlan’s Roberts warns that remote software distribution can become a nightmare if it’s not managed properly. He recommends setting up a test lab to run remote software installs.
“The last thing you want to do is blast an install through your management system of software that doesn’t work,” Roberts says.
Diggs of North West Hendricks Schools is adamant that the best approach is to set up a test environment, configure based on the school’s specifications, take a snapshot and then run the test.
“Once you roll out the software, you don’t want to uninstall and re-image the machines,” she says. “Even with the automated tools, it’s still time consuming.”
Ask these questions to determine if lifecycle management software is right for your school district:
Bitwarden, the maker of the popular open-source password manager tool, has released ‘Secrets Manager,’ an end-to-end encrypted secrets manager for IT professionals, software development teams, and the DevOps industry.
The tool aims to act as a secure alternative to hard-coding secrets or sharing ‘.env’ files over email, giving users flexibility and scalability while keeping their secrets safe in the event of a data breach.
Those secrets typically include API keys, user authentication certificates, database passwords, SSL and TLS certificates, private encryption keys, SSH keys, etc.
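The anti-pattern Secrets Manager targets is easy to illustrate. A minimal sketch contrasting a hard-coded credential with one resolved at runtime from the process environment, which is where a secrets manager’s CLI or agent typically injects it (the key value and variable name below are made up):

```python
import os

# Anti-pattern: a credential baked into source code ends up in version control,
# build artifacts and backups, where scanners and attackers can find it.
API_KEY = "sk-live-0123456789abcdef"  # do NOT do this

# Better: resolve the secret at runtime. A tool such as Bitwarden Secrets Manager
# can inject the value into the process environment, so the source stays clean.
api_key = os.environ["SERVICE_API_KEY"]  # raises KeyError if the secret is missing
```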
These secrets are often inadvertently exposed online following cyberattacks or leaked publicly because of poor security practices in the development lifecycle.
Last year, Symantec reported that over 1,800 apps for the iOS platform contained hard-coded AWS credentials, exposing their developers and users to varying risk levels.
The problem is so widespread that GitHub launched a system that would alert repository owners of misconfigurations leading to the exposure of secrets, and independent security researchers wrote open-source tools dedicated to scanning for secrets in publicly exposed AWS S3 storage buckets.
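Those scanners rest on a simple idea: pattern-match well-known credential formats in anything publicly readable. A toy sketch using the documented AWS access key ID format (the prefix AKIA followed by 16 uppercase alphanumeric characters); it is no substitute for real tooling:

```python
import re
import sys

# AWS access key IDs follow a documented prefix-plus-16-character pattern.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan(path):
    """Print each line of a file that looks like it contains an AWS access key ID."""
    with open(path, errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            if AWS_KEY_ID.search(line):
                print(f"{path}:{lineno}: possible AWS access key ID")

for path in sys.argv[1:]:
    scan(path)
```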
Bitwarden Secrets Manager is poised to solve this problem by giving users an easy and secure way to retrieve, share, and deploy secrets across development teams while also supporting granular access permissions for individuals or groups.
Secrets Manager follows the same open-source approach as the password manager, so its codebase, CLI, SDK, and integration code are subject to scrutiny and also allow the flexibility of custom implementations.
The tool is offered in three tiers, depending on the needs of development teams, but there’s a free version supporting unlimited secrets, two users, three projects, and three service accounts.
The ‘Teams’ and ‘Enterprise’ tiers, which cost $6 and $12 per month, respectively, raise those limits and offer additional business functionality such as support for FIDO2 authentication, automated provisioning, SSO integration, and more advanced administrative capabilities.
For now, Bitwarden Secrets Manager supports integration with GitHub Actions, but support for Kubernetes, Terraform, and Ansible integrations is expected to land in future versions.
Also, more languages are to be added to the tool’s SDK, and access management will be enhanced with additional options for individual secret assignments to specific accounts.
Managing vendor contracts, controlling hardware and software costs and optimizing IT assets to meet business requirements constitute critical chores for IT professionals. Symantec’s Altiris Asset Management Suite 7.1 aims to remove the hassle from IT asset management by giving enterprises the detailed information they need to make smart, informed decisions. Such tools are a necessity in today’s cost-conscious workplace.
Altiris Asset Management Suite (AMS) culls data from Symantec’s Client Management Suite (CMS) and Server Management Suite (SMS). AMS also integrates with similar Microsoft discovery tools so IT departments can tap installed investments.
IT departments often find it tough to tease out relationships between hardware, software, associated contracts, end users and user groups. Altiris AMS takes away the pain of guessing who has what system, who has what installed on their system and when their licenses are due for renewal.
The downloadable suite provides a wizard that assesses whether a system meets the minimum product requirements and will add any missing applications if prompted — a cool feature that saves the administrator time during installation. Once I confirmed that my hardware met the minimum product requirements, AMS downloaded successfully, and the installation and initial setup were painless.
AMS’s user interface logically divides hardware and software. An application metering capability provides insight into which applications have been installed, which have been paid for and which are being used. Such information makes this a real cost-cutting tool for IT.
IT managers can also see the full cradle-to-grave lifecycle of an asset, including contracts of all types associated with hardware and software, purchase orders, service-level agreements, warranties and even retirement and disposal documentation.
Administrators can calculate total cost of ownership by factoring in discoverable data such as purchase costs, monthly maintenance fees or chargeback costs. It’s possible to customize AMS to include fields specific to a school district and also add non-discoverable information to an asset, such as an additional cost center.
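The underlying TCO arithmetic is simple to model. A minimal sketch combining discoverable purchase and maintenance data with the kind of custom, non-discoverable cost fields described above (all figures hypothetical):

```python
def total_cost_of_ownership(purchase_cost, monthly_maintenance, months_in_service,
                            extra_costs=0.0):
    """Sum purchase price, accumulated maintenance fees and any custom cost fields."""
    return purchase_cost + monthly_maintenance * months_in_service + extra_costs

# A hypothetical lab PC: $900 purchase, $15/month maintenance, 36 months in service,
# plus a $100 cost-center chargeback entered as a custom field.
print(total_cost_of_ownership(900, 15, 36, extra_costs=100))  # 1540.0
```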
It’s also possible to designate who may view asset information by groups, which proves useful for security. For example, IT staff can limit asset visibility of a satellite campus to authorized people at that facility.
AMS is optimized for and depends heavily on its associated Symantec discovery tools, CMS and SMS. Figuring out these dependencies may take a bit of time and some experimentation. According to Symantec, most deployments consist of AMS coupled with CMS.
Altiris Asset Management Suite requires the Symantec Management Platform, which includes the Symantec Management Console, Database, Notification Server and Asset Management Suite components. The Management Server must be installed with .NET Framework 3.5 SP1 or above, Internet Explorer 7.0 or above, SQL Server 2005 or 2008, and Windows Server 2008 R2 x64. The Workflow Server needs Windows Server 2003, 2008 or 2008 R2; SQL Server 2005; Windows IIS; and Microsoft .NET Framework 3.5.
The convergence of information technology (IT) with operational technology (OT) and engineering technology (ET) is a crucial enabler for digital transformation in companies, particularly asset-intensive industries such as mining and manufacturing. We can see this in the partnership between AFRY, a leader in engineering design and advisory services, and Infosys, a leader in next-generation digital services and consulting.
This article focuses on AFRY’s process industry business and how the two companies partnered to deliver an IT-OT-ET integrated "single source of truth" that assures data integrity from initial engineering and construction across all the plant lifecycle stages. That integration speeds the ability to ramp up to design capacity, eliminates delays due to engineering rework and costly design fixes, reduces unplanned downtime and improves overall plant performance and productivity.
AFRY is a trailblazer in a domain that has traditionally been slow to fully embrace the latest technological advances. As Kai Vikman, COO at AFRY, noted, "Successful IT-OT-ET integration is a clear prerequisite to reap the benefits of digital manufacturing at scale." He also believes that such integration will become an obligation under the new European Data Act, which calls for more harmonized rules on fair access to and use of data.
Getting started: The handover from construction to operations
The life span of a process plant in industries such as industrial chemical manufacturing is typically more than 50 years. Building such a plant is a complex multistep process, and its success will rely heavily on effective collaboration among all stakeholders covering multiple disciplines from process engineering to mechanical engineering to architecture to electrical and instrumentation to piping and construction.
After the plant is complete, there is a handover of information from the builder to the plant operator. The handover may involve millions of documents from multiple engineering, procurement and construction (EPC) contractors. Transferring relevant data in a format usable by the plant’s operations and maintenance teams is a challenge and a potential inhibitor that could add months or years to the schedule for making the plant fully operational.
The data involved in this process spans multiple disciplines. It might include standard technical specifications, process and instrumentation diagrams, process flow diagrams, architectural designs and schematics, electrical circuit diagrams, instrumentation details or a 3-D model of the plant. Each of these elements adds to the complexity.
Leveraging global standards for data sharing and integration
IT-OT-ET integration plays a central role as a critical facilitator for many other systems and information integration. The key to success is information standardization, ensuring minimum effort to hand over information between parties. Infosys worked with AFRY to establish the standard guiding principles and class libraries from multiple industry standards and best practices, as no single standard could address the data integration challenges across the lifecycle. The approach uses ISO 15926 (“Integration of lifecycle data for process plants, including oil and gas production facilities”), a globally recognized standard for data sharing and integrating complex plant and project information.
ISO 15926’s Resource Description Framework (RDF) implementation acts as a universal reference across disparate information systems, providing a neutral information layer through which any software application with an ISO 15926 adaptor can exchange data. It preserves the precise meaning of the data as it is exchanged by referencing a data dictionary containing definitions of all objects and associated attributes within the plant. This ability of systems to exchange information with shared meaning by using universal standards is called semantic interoperability.
In a semantic implementation, data arrives pre-packaged with self-described context, and the receiving system can derive meaning from that data through a universal vocabulary. In this case, Infosys added data about the data (i.e., metadata) and linked each element to a controlled, shared vocabulary defined by ISO 15926.
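Concretely, that metadata takes the form of triples linking each plant object to a term in the shared vocabulary. A minimal sketch using Python’s rdflib library; the pump tag and class URIs are illustrative stand-ins, not drawn from the actual AFRY-Infosys implementation or the official ISO 15926 reference data library:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespaces: one for the plant's own objects, one standing in for
# an ISO 15926-style reference data library of shared class definitions.
PLANT = Namespace("http://example.com/plant/")
RDL = Namespace("http://example.com/iso15926-rdl/")

g = Graph()
# State that tag P-101 is an instance of the shared "centrifugal pump" class, so
# any system that understands the reference library can interpret it without
# custom point-to-point mapping.
g.add((PLANT["P-101"], RDF.type, RDL["CentrifugalPump"]))
g.add((PLANT["P-101"], RDL["hasDesignPressureBar"], Literal(16.0)))

print(g.serialize(format="turtle"))
```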
Other standards leveraged were the Capital Facilities Information Handover Specification (CFIHOS) and the DEXPI Initiative, promoting general data exchange standards for the process industry, with a current focus on Piping and Instrumentation diagrams. Infosys also used the OPC Unified Architecture (OPC UA) standard for operational technology integration for machine-to-machine communications for industrial automation.
Together with Infosys, AFRY has set up a sandbox environment integrating Virtual Site (a plant engineering system), SAP (the enterprise business planning system) and the Simatic platform (a plant automation system) to demonstrate new use cases. The structured data is implemented in an application server that binds the semantics to data based on the chosen standards so that subsequent applications can retrieve information efficiently. The environment is currently set up on the Microsoft Azure platform but can be implemented on any on-premises or public cloud platform. The unique contribution of the AFRY-Infosys partnership is the standardization and harmonization of data using an interoperability layer aligned to global standards.
Overall benefits of a single integrated source of truth
By integrating plant lifecycle data across the IT, OT and ET domains, Infosys and AFRY were able to build a single source of truth across the plant lifecycle—a digital twin of the entire plant. The digital twin is an exact digital representation of the physical plant and accurately reflects its state, including engineering information and all of the information about work processes for operations and maintenance.
Sharing integrated plant engineering data in the correct format between EPC companies and the plant operator reduced delays, rework, conflicts and change orders during the construction phase. Multidisciplinary engineering data simplified conformance to regulatory, environmental, safety and compliance standards.
For operations, a single source of information available at the right time, place and format led to significant improvements in long-term lifecycle performance and optimization, maximizing plant yield and efficiency. Safety information management with standardized processes, augmented by safe-working training, led to fewer accidents and less time lost to injury.
Effective maintenance management reduced unplanned downtime and brought a significant reduction in maintenance costs, thanks to well-organized maintenance data and procedures, easy-to-find technical data sheets and ready access to spare parts. Deploying engineering data management as a shared data source to support digital solutions such as predictive maintenance resulted in improved productivity per technician and reductions in mean time to repair.
Wrapping up
The challenges that AFRY is tackling are in a domain that has been hesitant and slow to fully embrace the latest technological advances. The result has been fragmentation, inadequate collaboration with suppliers and insufficient knowledge transfer from project to project. For the longest time, plant engineering data has resided in silos.
When a problem occurs in the plant, it is hard for engineers, operations and maintenance people to access information and identify the cause. When changes occur, it takes way too long to update the other systems that need to know about the change. The result is that the systems people rely on don't have accurate or sufficient data. The industry needs a radical approach. If digitalization is the primary goal, interoperability is the means to achieve it, and interoperability requires standardization.
Transactional and business process information (from IT), the monitoring and analysis of industrial assets (OT) and the use of engineering design data (ET) are all essential for the proper day-to-day function of a process plant. The incremental value of the AFRY-Infosys partnership comes from creating interoperability among these domains when the IT-OT-ET data is brought together in a single source of truth as the foundation for a digital enterprise.
Moor Insights & Strategy provides or has provided paid (wish services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and video and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Adobe, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Ampere Computing, Analog Devices, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Avaya Holdings, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Cadence Systems, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cohesity, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Elastic, Ericsson, Extreme Networks, Five9, Flex, Fortinet, Foundries.io, Foxconn, Frame (now VMware), Frore Systems, Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, HYCU, IBM, Infinidat, Infoblox, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Intuit, Iron Mountain, Jabil Circuit, Juniper Networks, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, LoRa Alliance, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, MemryX, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, Movandi, Multefire Alliance, National Instruments, Neat, NetApp, Netskope, Nightwatch, NOKIA, Nortek, Novumind, NTT, NVIDIA, Nutanix, Nuvia (now Qualcomm), NXP, onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Rigetti Computing, Ring Central, Salseforce.com, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Veeam, Ventana Micro Systems, Vidyo, Volumez, VMware, Wave Computing, Wells Fargo, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler.
The Vanguard Group is warning the Financial Stability Oversight Council that applying the same one-size-fits-all risk management rules to all types of financial services companies could backfire by increasing the odds that different types of companies will crash at the same time.
Vanguard talks about the dangers of promoting regulatory “groupthink” in a letter it sent to FSOC last week, in response to FSOC efforts to regain the ability to take over specific, potentially risky nonbank financial services companies quickly.
Vanguard says FSOC seems to be moving toward replacing the different sets of solvency rules that regulators have developed for insurers, housing finance providers and other nonbank financial firms with one set of rules based on the Federal Reserve regulations for banks.
That could “lead to increased correlation of risk management practices,” Vanguard says. “Relying on a single, even if Federal Reserve-approved, risk management approach may increase the likelihood of herding behavior. This is a suboptimal way to mitigate macroprudential risk.”
Vanguard and its competitors have helped you convince your clients that diversification is a good approach to retirement planning.
Now, they’re trying to sell FSOC on the idea that diversification might also help with financial system risk management.
Congress put the statutory language creating FSOC in the Dodd-Frank Act, in an effort to keep the kind of complicated, previously obscure financial system problems that nearly crashed the world financial system in 2008 from cropping up in the future.
U.S. Treasury Secretary Janet Yellen is the chair of FSOC.
FSOC also includes heads of agencies such as the Federal Reserve Board and the U.S. Securities and Exchange Commission, a voting member with insurance expertise, the head of the Treasury Department’s Federal Insurance Office, and a representative from the National Association of Insurance Commissioners.
Federal bank regulators already had the power to swoop in and manage risk when a bank seemed likely to fail. One of FSOC’s goals was to find a way to identify nonbanks as “systemically important financial institutions” (or SIFIs) and to give the Federal Reserve Board the ability to apply some of the same discipline used for banks to nonbank SIFIs.
FSOC began by taking an aggressive approach to identifying SIFIs. Those companies quickly escaped their SIFI designations by taking steps such as restructuring their operations or going to court.
In 2019, FSOC agreed to back away from aggressive SIFI designation efforts; defer to nonbanks’ primary regulators, when possible; and to emphasize the regulation of potentially risky activities rather than oversight over specific companies.
Originally, FSOC was going to set a June 27 deadline for comments. It then responded to commenters’ requests for more time by pushing the deadline back to July 28.
Successful dissolution of gallstones by the oral administration of bile-acid mixtures was reported almost 70 years ago.[22] It was, however, only in the 1970s that this form of therapy was tested on a larger scale.[23,24,25] Initially, CDCA was used,[26] but due to a dose-dependent increase in aminotransferases, an increase in serum low-density lipoprotein cholesterol, and the development of bile salt-induced diarrhea, the treatment raised concerns. Because the more hydrophilic UDCA appeared to be as effective in gallstone dissolution but was practically devoid of side effects, it rapidly replaced the use of CDCA.[27,28]
The idea behind oral administration of CDCA and UDCA was to enrich the bile with these bile acids and thereby decrease cholesterol supersaturation and dissolve the stones. In fact, total bile salt concentration in bile did not change appreciably, and the decrease in cholesterol saturation was achieved primarily by a decrease in biliary cholesterol concentration. While both bile acids do decrease biliary cholesterol secretion,[29] they do so by different mechanisms. CDCA decreases cholesterol synthesis by inhibiting hepatic HMG-CoA reductase activity, whereas UDCA does not affect cholesterol synthesis but reduces intestinal cholesterol absorption.[30] CDCA also decreases hepatic bile-acid synthesis, but UDCA does not, and may even slightly increase it. There is also a difference between the 2 agents in terms of the physical chemical mechanism of gallstone dissolution: CDCA removes cholesterol from the stones by micellar solubilization, whereas UDCA does so primarily by formation of a liquid crystalline phase.[31,32]
In 1981, the National Cooperative Gallstone Study established the efficacy and safety of CDCA therapy.[26] Although gallstones could be dissolved by oral administration of CDCA, its efficacy was low. Less than half (40.8%) of patients responded to the highest dose tested (750 mg/day), and only 13.5% had complete dissolution of their stones within 2 years. Moreover, the response was slow: in over half of responders, more than 9 months of treatment were needed for complete dissolution. Subsequently, the introduction of UDCA with a better safety profile and equal or better efficacy made bile salt litholysis more attractive.
The efficacy of CDCA is dose-dependent, but so are its side effects. Hence, a full dose of 15 mg/kg/day will induce diarrhea in up to 60% of patients, increase cholesterol levels in most patients, and cause hepatotoxicity in over 3%. In comparison, the recommended dose of UDCA (10-12 mg/kg/day) has essentially no side effects except occasional diarrhea. Therefore, monotherapy with CDCA cannot be recommended and has been completely replaced by UDCA therapy. Combination therapy with a reduced dose of both bile acids (5-8 mg/kg/day of each) has also been suggested, and may be as safe and efficient as full-dose UDCA monotherapy, as well as less costly.[33,34] UDCA monotherapy does, however, cause less diarrhea, and therefore it remains the treatment of choice today.
UDCA is usually given at a dose ranging between 8 and 15 mg/kg/day. Bedtime administration is preferable because it maintains hepatic bile-acid secretion rate overnight, thus reducing secretion of supersaturated bile and increasing the dissolution rate.[35,36] Dissolution is assessed by ultrasonography every 6 months. The expected dissolution rate is approximately a 1-mm decrease in stone diameter per month of treatment.[37] Treatment is usually continued for another 3 months after successful dissolution.
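Those dosing and monitoring rules translate into a simple planning calculation. A rough sketch, assuming the approximately 1-mm/month dissolution rate and the 3-month consolidation period cited above (an illustration of the arithmetic, not a treatment protocol):

```python
def udca_daily_dose_mg(weight_kg, mg_per_kg=10):
    """Daily UDCA dose within the usual 8-15 mg/kg/day range, defaulting to 10."""
    return weight_kg * mg_per_kg

def estimated_treatment_months(stone_diameter_mm, dissolution_mm_per_month=1.0,
                               consolidation_months=3):
    """Expected months of therapy: time to dissolve plus post-dissolution treatment."""
    return stone_diameter_mm / dissolution_mm_per_month + consolidation_months

print(udca_daily_dose_mg(70))          # 700 mg/day for a 70-kg patient
print(estimated_treatment_months(8))   # ~11 months for an 8-mm stone
```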
In up to 10% of patients, cholesterol gallstones acquire a surface calcification during treatment, rendering them nondissolvable and unsuitable for further therapy with bile acids.[38]
Not all patients are suitable candidates for oral dissolution therapy. Selection criteria are based on 3 main aspects: (1) patient, (2) gallbladder, and (3) stone characteristics. Patients with complications or with frequent and severe attacks of biliary colic are not suitable candidates. Patients with mildly symptomatic gallstones are the best candidates.[28,39] Patients with increased surgical risks or those who do not want to undergo surgery due to personal preferences should be considered for medical dissolution therapy. Asymptomatic patients are currently not treated. For medical therapy to be effective, the gallbladder needs to fill and function. Finally, only cholesterol stones can be dissolved by bile acids, and any significant calcification of the stones will render them nondissolvable.
Gallbladder function -- as well as cholesterol content of stones -- can be assessed by oral cholecystography.[40] After oral intake of an iopanoic acid derivative, a plain abdominal x-ray will show radiolucent cholesterol stones floating within a radiopaque contrast-filled gallbladder.[41] Gallbladder function can be further evaluated by measuring the emptying or ejection fraction following a fatty meal. Ultrasonography is the easiest and most precise method for detecting the presence of stones. Ultrasonography as well as cholescintigraphy may also be used to assess cystic duct patency and gallbladder function by measuring the ejection fraction after a fatty meal or cholecystokinin injection.[42,43,44] Some clinicians have even suggested that ultrasonography may predict stone composition prior to bile-acid or shock-wave lithotripsy treatment.[45,46] Several investigators have shown that the degree of stone calcification and suitability for bile-acid dissolution therapy can be accurately assessed by computed tomography (CT).[47,48,49] Hence, a combination of CT for stone composition and ultrasonography for gallbladder filling and function is also a good alternative for appropriate patient selection.
The success of oral dissolution treatment is defined as complete disappearance of gallstones as documented by oral cholecystography or, preferably, ultrasonography. This is achieved in 10% to over 80% of patients. The wide range of success reflects differences in patient selection, treatment duration, dosage, and ways of assessing success.[50] In a meta-analysis comprising almost 2000 patients treated until 1992, complete dissolution was achieved in 18.2% with CDCA, in 37.3% with UDCA, and in 62.8% with combination therapy.[51] In patients with small stones (< 10 mm), a dissolution rate of 48.5% was seen with UDCA therapy.
By employing more strict selection criteria, the efficacy of this treatment can be increased, but at the expense of the number of suitable candidates.[52] Thus, an optimal lean patient with small (< 5 mm) radiolucent stones (approximately 3% of all symptomatic patients) will have a 90% likelihood of complete dissolution within 6 months.[53] In contrast, patients with 5- to 10-mm radiolucent stones (approximately 12%) will have only a 50% chance of successful dissolution within 9 months.
Initially, extracorporeal shock-wave lithotripsy (ESWL) was introduced as an adjunct to bile-acid therapy.[54,55] The rationale was to use ESWL to fragment larger stones to increase dissolvable surface area, shorten treatment time, and increase the pool of patients suitable for bile-acid dissolution. With increasing experience it became clear that ESWL was actually an independent treatment modality.[56] After pulverizing gallstones to tiny sand-like fragments, there seems to be little if any benefit of or need for additional bile-acid therapy.[56,57,58]
A significant drawback of gallstone dissolution therapy is the possibility of gallstone recurrence. Stones will recur because the gallbladder is left in place and the underlying cause of gallstone formation has not been corrected. The recurrence rate is about 10% annually for up to 5 years,[59] and is often preceded by sludge formation.[60] Thereafter, recurrence is uncommon. Most stones recur without symptoms[50] and will respond to re-treatment with bile acids.[61,62] Maintenance therapy with low-dose UDCA has been reported to decrease the recurrence rate but it is costly.[63] Patients with multiple primary stones have an increased recurrence rate.[63] Additional factors that have been reported to predict recurrence after successful lithotripsy are obesity,[64] poor gallbladder emptying,[65] an increased deoxycholic acid pool,[66] and an apoE4 genotype.[67] Whether these factors are important after medical dissolution is unclear.
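The cited 10% annual rate compounds over the roughly five years during which recurrence remains common. A small sketch of the cumulative probability, under the simplifying assumption that each year’s risk is independent:

```python
annual_recurrence = 0.10  # ~10% per year for up to 5 years, per the text

for year in range(1, 6):
    cumulative = 1 - (1 - annual_recurrence) ** year
    print(f"Year {year}: {cumulative:.0%} cumulative recurrence")
# By year 5, roughly 41% of patients would be expected to have had a recurrence.
```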
Because successful dissolution therapy is not inevitably followed by gallstone recurrence, there is a group of patients in whom the initial lithogenic process is transient. Pregnancy, rapid weight loss, and convalescence from abdominal surgery are recognized transient risk factors.[68,69] Trying to identify and characterize patients with transient lithogenicity for dissolution therapy is an important challenge for future studies.
As companies move to “blended” BPR, which seeks both long- and short-term benefits, they realize the need to involve different people and think differently about these projects than before.
“People are always saying that IT needs to get closer to the business,” says Jerry Luftman, executive director and distinguished professor at the Stevens Institute of Technology. “But business needs to get closer to IT too. For years, MBA programs and executive training programs have focused on the wrong things, such as the technical elements that turn people off. Businesspeople don’t need to know how to write software. They need to understand governance, the strategic operational point of view, how to demonstrate value, and what their role is in a major IT initiative.”
Lisa Anderson, head of LMA Consulting Group, Inc., a firm that works on supply chain and inventory projects, says the most successful reengineering projects involve progressive IT leaders who partner with business units. “You need to find people in the IT departments who have strong business acumen,” she says. “You need people who will sit down and explain, in non-technical terms, how they can leverage new technologies like business intelligence to improve inventory levels, supply chains and other processes.”
This type of business-first partnering has become more commonplace during the recession. The evolving nature of BPR has also increased the need for speed. “The time frame for most new projects now is yesterday,” jokes Ron Wince, CEO of Guidon Performance Solutions, a business process consulting firm, who adds that there’s a heightened focus on change management.
“Change management has always been an afterthought,” Wince says. “Even when companies did think of it, they didn’t really ingrain change management into the decision-making process as they do now.”
Companies are incorporating change management into the business case for BPR projects. In fact, Wince recently worked on a project where the company had an executive coach as part of the decision-planning process, with the role of helping executives change their behavior to be in line with the new processes.
While having a coach participate is quite unusual, more stakeholders, such as HR departments, are coming to the table early on in BPR projects, since the pace of projects demands that different units provide input in parallel rather than sequentially.
The danger comes when change management is written into the business case but a company still maintains old habits from past projects, when the pace was more leisurely. For example, one hospital, which had been rolling out electronic medical records (EMR) for a long time, had always built into its project plans that departments would have two months to work out the kinks with new technology. However, the new pace of BPR didn’t allow for that luxury, and the ROI goals required that the new processes be running efficiently within two weeks. Immediately, business units pushed back about how quickly they were supposed to change the way they worked.
Moving forward, the hospital implemented an “internal readiness team,” which worked with employees and hired an external HR consultant to do surveys of their concerns and opinions. Because of this forethought, the hospital was able to identify and address a lot of issues with adoption well ahead of time, reducing the time it took to implement the new project by 15 percent.
“Companies don’t have the luxury of only involving the C-level and financial people in process reengineering,” Wince says. “Everyone is feeling the pressure to be ready for the growth that is coming, and that means they have to look at their processes, and how they manage their processes, in a different way.”