In a traditional Life Sciences facility, anything that touches or even comes close to touching the product typically gets the full validation workup. Once validation is complete, the system is placed under tight change control to ensure no modification jeopardizes efficacy or safety. Meanwhile, the industry has been moving steadily toward what is referred to as a risk-based approach.
The simple interpretation: focus substantial effort on the highest-risk areas to ensure they are correct and tightly controlled, and spend just enough time on lower-risk areas to confirm they function properly without going overboard. Unfortunately, many organizations spend the same time and money on indirect impact systems and parameters as they do on direct impact systems. Take one real-world example: the lowly keyboard and mouse.
How many times have you seen someone document in a design spec that the operator station has a 104-key PS/2 keyboard and a 2-button PS/2 mouse? Suppose the mouse breaks, and you purchase a replacement with a scroll wheel in addition to the two buttons, connected over USB instead of PS/2. Oh, the horror! Now we have a deviation that could generate as much paperwork during IQ as finding a critical alarm that was not enabled properly. Does this sound logical? Or even worse: after the system is qualified, the mouse breaks and you cannot locate a 2-button PS/2 mouse anywhere. Now you have a change control, a design spec update, a qualification test, and so on. Let's take this observation and apply it to environmental conditioning and monitoring.
A typical setup includes an air handling unit with ductwork distributing filtered, conditioned air to clean spaces. We control the discharge temperature, pressure, and possibly the humidity, depending on the application. The next stop is the rooms, but before the air gets there, it passes through a VAV box or a simple coil to fine-tune the room temperature. Finally, in the room we monitor humidity, temperature, and differential pressure relative to adjacent spaces. Now consider which measurements are actually critical to ensuring the product meets all quality specs. Does the discharge pressure of the air handler matter? If your room-to-room differential pressures are fine, what does it matter? What about the discharge temperature? If the temperature in the room is acceptable, is the temperature coming out of the air handler critical? Don't get me wrong: you can waste a lot of energy, or make a space very hard to maintain, if you don't control to appropriate values at the unit itself, but those values are truly incidental to the actual conditions in the room.
What if we qualified only the space monitoring in the rooms themselves and treated the balance of the system as a lesser-tier system, perhaps just commissioned? Let's take this one step further and put space-condition monitoring on a completely separate PLC or an entirely separate software stack. Now we can draw a very clear line between qualified and commissioned.
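To make the boundary concrete, here is a minimal sketch of what the qualified side could look like if room monitoring lived in its own software stack. The room name, limits, and the read_room_sensor stub are hypothetical placeholders, not any product's API; the point is simply that the monitor reads and judges room conditions with no dependency on the air handler's control code.

```python
import logging
import random
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Qualified acceptance limits per room -- illustrative values only.
LIMITS = {
    "suite_101": {"temp_c": (18.0, 24.0), "rh_pct": (30.0, 60.0), "dp_pa": (10.0, 30.0)},
}

def read_room_sensor(room, point):
    """Stand-in for a read from the dedicated monitoring I/O; here we
    simulate a value near the qualified range for demonstration."""
    lo, hi = LIMITS[room][point]
    return random.uniform(lo - 1.0, hi + 1.0)

def check_room(room, limits):
    for point, (lo, hi) in limits.items():
        value = read_room_sensor(room, point)
        if lo <= value <= hi:
            logging.info("%s %s ok: %.1f", room, point, value)
        else:
            logging.warning("%s %s OUT OF RANGE: %.1f (limits %.1f-%.1f)",
                            room, point, value, lo, hi)

for _ in range(3):  # in a real monitor this loop would run continuously
    for room, limits in LIMITS.items():
        check_room(room, limits)
    time.sleep(1)
```

Because nothing here references the air handler, changes to the AHU control logic cannot, by construction, touch the qualified monitoring.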
There are two real costs to calling something a qualified system. First is the initial purchase cost, which encompasses engineering, documentation, installation, and testing. Then there is the hidden cost of the additional change control burden. How much flexibility would you have to improve energy usage or general system efficiency if you could modify the air handler without the added burden of change control for a qualified system? We've seen a number of customers implement a similar approach: validate the space-condition monitoring and just commission the air handler itself. Unfortunately, everything typically runs on the same controller, so when change control time comes, it can be a difficult fight to convince the review board that you want to change code in the controller but it won't affect the environmental monitoring.
Do you have a new facility under construction? Are you working on up-fitting a new suite? Maybe you want to upgrade your air handler controls, but the anticipated validation cost and schedule is a deal breaker. Let's talk and see if we can find some new solutions to your age-old problems.
Terminal Services has long been the domain of the IT geeks. Many have also considered the technology synonymous with poor performance and terrible displays, neither of which is desirable on the manufacturing floor. If you haven't looked at terminal services lately, you are in for quite a surprise.
First and foremost, the capabilities of today's thin client hardware are remarkable. Driving multiple screens at 1920 x 1200 is now a standard feature on many units, and true-color rendering is the rule rather than the exception. With the powerful and varied software options available to the terminal services user, the real question is no longer whether terminal services can add value on your shop floor, but how. Let's explore some major features of the modern terminal server environment that can benefit your engineering and operations groups.
One of the primary value propositions of a thin client is the small, low-power, low-heat form factor of many industrial units. You wouldn't think of locating a full PC inside a small enclosure due to heat considerations. However, with a sealed, fanless industrial unit rated for temperatures as high as 60 degrees C, users have numerous mounting and installation options.
Second, the sealed, fanless units are typically rugged enough to install directly in the environment without a separate enclosure. Aside from the physical installation benefits, there are numerous software-side issues a terminal services environment can address. First and foremost, you dramatically reduce your installation footprint: instead of setting up and maintaining ten PCs, you have a single server in a locked, climate-controlled room whose software versions and configuration you must maintain. A question heard frequently at this point is, "What happens when that server goes down and we lose all ten clients at once?"
Once again, modern terminal services software offers a number of choices. One recovery method is a simple failover: a new session starts on a backup terminal server, and the operator's HMI restarts, either manually or automatically. If 5-10 seconds of lost visibility is not acceptable, another option is what is referred to as "instant failover." In this scenario, the user session running on one terminal server is replicated exactly on another. If the primary server goes down, the user is swapped to the backup, picking up exactly where they left off.
Finally, consider the 3 a.m. failure of an operator workstation. How long does it take a technician to replace that PC? Perhaps you have one on the shelf ready to go, but can you guarantee its configuration is current? What about IP addresses and the like? Current thin client packages can turn the replacement into a service call of less than five minutes: unplug the old unit, plug in the new one, and start it. Enter two or three values in a configuration screen, then reboot. The unit is recognized as new, and the technician is asked, "Is this a replacement for another unit that now appears to be offline?" Answer yes, and the operator is back in business with the same session.
It has been our experience that terminal services projects are sold not on installed cost but on total cost of ownership over the life of the hardware and software. Dramatic improvements in your ability to manage the system, along with reductions in downtime and spare parts inventory, typically tip the scales once you get past four or five HMIs on your shop floor.
Give us a call to discuss what terminal services can do for your facility.
As is often the case in pharmaceutical controls, integrating the recipe system of the master control system with those of OEM skids is a struggle. More often than not, this is one of the most difficult areas to homogenize within a given process, and when a single process line carries multiple skids from multiple vendors, the problem compounds. This may be a scenario your facility has faced, or is still struggling with.
Although there are countless ways to integrate the systems, one of the least difficult is "recipe matching." Recipe matching involves either "pushing" a given recipe down from the master control system or "pulling" a recipe up from the OEM skid and comparing the two electronically. When the recipes are compared, any differences can be flagged as mismatches or recorded as process overrides, depending on your needs and validation requirements. Since the comparison can be made multiple times during the process, it allows for initial preset matching as well as the recording of local field changes or in-process updates. A hidden benefit is that it allows your OEM equipment to run standalone, in a non-GMP manner, for maintenance and testing activities.
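As a sketch of the comparison step, assume each recipe can be flattened to a dictionary of parameter names and values; the parameter names, values, and override handling below are illustrative assumptions, not any particular vendor's interface.

```python
# Minimal sketch of electronic recipe matching between a master control
# system and an OEM skid. Recipes are modeled as flat parameter dicts.

def compare_recipes(master, skid, overrides=None):
    """Return (mismatches, recorded_overrides) for two recipe dicts."""
    overrides = overrides or set()
    mismatches, recorded = [], []
    for param in sorted(set(master) | set(skid)):
        m, s = master.get(param), skid.get(param)
        if m == s:
            continue
        entry = (param, m, s)
        if param in overrides:
            recorded.append(entry)    # approved in-process change
        else:
            mismatches.append(entry)  # flag for investigation
    return mismatches, recorded

master = {"agitator_rpm": 120, "temp_sp_c": 37.0, "hold_min": 30}
skid   = {"agitator_rpm": 120, "temp_sp_c": 36.5, "hold_min": 30}

mismatches, recorded = compare_recipes(master, skid, overrides={"temp_sp_c"})
for param, m, s in recorded:
    print(f"override recorded: {param} master={m} skid={s}")
for param, m, s in mismatches:
    print(f"MISMATCH: {param} master={m} skid={s}")
```

Because the comparison is a pure function of the two recipes, it can be rerun at preset load, after local field changes, and at batch end, with each result written to the batch record.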
When applied in conjunction with a facility's electronic reporting system, recipe matching can be a great way to seamlessly integrate third-party "islands of automation" into a centralized, cohesive control system. Strategies like this let you, the end user, get the most from your preferred vendors while maintaining ease of operation within your facility.
We have the expertise and experience to evaluate and apply these control methodologies as well as others to your new or existing control system. We welcome the opportunity to continue this discussion with you at your facility.
In large-scale continuous pharmaceutical or biotech processes, starting the system and then maintaining steady-state operation isn't always the easiest of tasks. However, properly designed control systems have been used to perform reliable startup/shutdown operations, help operators react and recover quickly from process disruptions, and maximize the overall rate of production. This article highlights a few key concepts that should be incorporated into control strategies to keep that product rolling on down the tracks.
The startup/shutdown portion of a continuous process can be quite complicated and is not something easily defined through software modeling. Quite often, the startup procedure for a continuous process is derived from a mix of safety criteria, engineering design, and procedures inherited from other plants with varying degrees of automation. Many times during development, the startup/shutdown operations do not get the same focus as steady-state operation. Good design accounts for the difficulties of these transitions and addresses them up front, typically through staged sequencing with explicit permissives and operator hold points, as sketched below.
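As a simplified illustration of staged sequencing, here is a sketch of a startup phase machine in which each phase advances only when its permissives are met, and holds in place otherwise. The phase names, process values, and permissive checks are invented for the example, not a specific plant's logic.

```python
PHASES = ["purge", "warm_up", "establish_flow", "ramp_to_rate", "steady_state"]

def permissives_met(phase, process):
    """Each phase may advance only when its permissive check passes."""
    checks = {
        "purge":          process["o2_pct"] < 1.0,      # inerting complete
        "warm_up":        process["temp_c"] > 80.0,     # at temperature
        "establish_flow": process["flow_lpm"] > 10.0,   # stable flow proven
        "ramp_to_rate":   process["rate_pct"] >= 95.0,  # near design rate
    }
    return checks.get(phase, True)

def step(phase, process, hold_requested=False):
    """Advance one phase per call; hold in place on operator request or a
    failed permissive rather than skipping ahead."""
    if hold_requested or not permissives_met(phase, process):
        return phase
    i = PHASES.index(phase)
    return PHASES[min(i + 1, len(PHASES) - 1)]

process = {"o2_pct": 0.5, "temp_c": 85.0, "flow_lpm": 12.0, "rate_pct": 50.0}
phase = "purge"
for _ in range(5):
    phase = step(phase, process)
    print(phase)  # holds at ramp_to_rate until the rate permissive clears
```

The hold behavior is the point: an operator can pause the sequence at any phase and recover from a disruption without restarting from scratch.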
Once a well-designed control system is installed, smooth and efficient startups can be achieved, turning past troubles into memories and old war stories. Even better, these approaches have allowed our clients to shave hours from startup routines, cutting as much as 50% of the time required to reach full operation, and letting operations and engineering personnel keep that train running strong.
On a recent startup, a plant process engineer requested that a level controller be tuned to eliminate nuisance alarms. This was a simple, straightforward request, so we started tuning the loop. The tank in this case was a small buffer tank that smoothed out the pulsing flow from the upstream piston-style filtration unit. As the retuned level controller began holding a more consistent level, we noticed the downstream flow was now pulsating. That flow fed the rest of the process, so a pulsing flow was not an acceptable result. After bringing this to the attention of the requesting process engineer, he agreed this was not what plant operations wanted, and we reverted to the original tuning.
This experience shows the importance of looking at the entire process, not just an individual unit or loop. A quick solution to a seemingly simple request would have created a larger problem. The ultimate question that needed answering was "Did this level need to be controlled, and did the level alarm matter?" A secondary question was "Would this be an issue after startup, when the plant was running normally?"
There are several options for solving this particular issue. Alarm rationalization could determine which alarms are actually needed and the appropriate severity for each. Variable alarm limits or alarm inhibiting may be required for different modes of plant operation. And, of course, looking at the total process is key: a fluctuating level may be exactly how normal operation is intended to work. In some cases, a cascaded level-to-flow controller, sketched below, may be the best way to achieve the desired results.
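Here is a minimal sketch of that cascaded arrangement: a loosely tuned, direct-acting level controller writes the setpoint of a tightly tuned flow controller, so the tank level is allowed to float within limits while the downstream flow stays smooth. The gains and limits are illustrative assumptions, not recommended tuning values.

```python
class PI:
    """Minimal PI controller with output clamping and anti-windup."""
    def __init__(self, kp, ki, out_min, out_max, direct=False):
        self.kp, self.ki = kp, ki
        self.out_min, self.out_max = out_min, out_max
        self.direct = direct
        self.integral = 0.0

    def update(self, sp, pv, dt):
        # Direct action: rising level should raise the outflow setpoint.
        error = (pv - sp) if self.direct else (sp - pv)
        self.integral += self.ki * error * dt
        self.integral = max(self.out_min, min(self.out_max, self.integral))
        return max(self.out_min, min(self.out_max, self.kp * error + self.integral))

# Loose (averaging) level loop: low gain lets the level float in the tank.
level_pi = PI(kp=0.5, ki=0.01, out_min=0.0, out_max=100.0, direct=True)
# Tight flow loop: holds downstream flow steady at the cascaded setpoint.
flow_pi = PI(kp=2.0, ki=0.5, out_min=0.0, out_max=100.0)

def cascade_step(level_sp, level_pv, flow_pv, dt=1.0):
    flow_sp = level_pi.update(level_sp, level_pv, dt)  # outer loop output...
    valve_out = flow_pi.update(flow_sp, flow_pv, dt)   # ...feeds the inner loop
    return flow_sp, valve_out

print(cascade_step(level_sp=50.0, level_pv=55.0, flow_pv=40.0))
```

The design choice is deliberate: the surge tank's whole job is to absorb upstream pulses, so the level loop is detuned on purpose and the flow loop does the precise work.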
Whether you are integrating new processes or improving current ones, we can help you determine which solution is best for your situation. We can help you understand each piece of the process, as well as the process as a whole, so that all interactions are accounted for. And, of course, we can help you weed out those pesky nuisance alarms. Give us a call; we will be happy to schedule a site visit to discuss our expertise and how we can help you.
Our automation integrators have years of proven expertise in process optimization for chilled water systems, using control strategies and revised mechanical configurations to deliver energy savings and better overall efficiency. Lee White, an automation consultant for Avid Solutions with over twenty years in the engineering and automation industry, published the article "Automated Chilling Systems Reduces Energy Consumption" in Process Cooling magazine. It presents a case study of a pharmaceutical manufacturer that minimized operational oversight and gained efficiency and control of its chilled water plant by adding automated controls.
Contact us today to discuss optimizing your chilled water system.
Through years of experience with power and chemical plant shutdowns, our experts understand that successfully returning a plant to operation requires a well-executed plan as well as the ability to respond to the unexpected. Processes that appear the most difficult to restart can come back online without a hitch, while seemingly simple ones may require a unique approach or a seasoned team to keep the overall schedule on track. Without a proper plan, the most mundane things rear their ugly heads and cause headaches of unimaginable proportions. For example, without a verification plan to ensure blanks or "fry pans" are not left in secondary flow lines, unseen problems may surface after the system has been restarted. As part of the plan, inexpensive technology such as an RFID tag on every blank can be used to ensure each one comes back out prior to startup; a simple reconciliation sketch follows.
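The reconciliation logic itself can be trivially simple. In this sketch, every blank is scanned when installed and again when removed, and anything left outstanding blocks the startup checklist; the tag IDs are invented for illustration.

```python
# RFID-based blank (blind) reconciliation during a shutdown.

installed = {"TAG-0417", "TAG-0558", "TAG-0923"}  # scanned at installation
removed   = {"TAG-0417", "TAG-0923"}              # scanned at removal

outstanding = installed - removed
if outstanding:
    print("DO NOT START UP - blanks still in the line:", sorted(outstanding))
else:
    print("All blanks accounted for; startup verification step cleared.")
```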
When faced with the unexpected, our engineers expand beyond the execution plan with a keen eye for anything that has changed. On one occasion, an insulation contractor had removed a few actuators to repair some piping insulation. When the contractor reinstalled them, one butterfly valve and actuator were reassembled 180 degrees out of phase, so the closed position looked closed at the shaft and actuator hat while the disc was actually reversed against the backside of the seat. The valve leaked at about 20% of normal flow while appearing completely shut. Since removal of the actuator was not part of the planned scope, the initial test plan did not look closely at the valve's functionality; expanding our view to "anything that has changed" revealed the error. When problems occur, the first question to ask is "What changed?" Experience and the mindset of a detective will solve these issues on startup.
As an integration firm with over 500 years of total engineering experience, we provide our customers with peace of mind knowing that we bring the experience of hundreds of plant startups to reference when restarting a facility. If you have an upcoming shutdown or automation system upgrade, let us bring this knowledge to your facility and assist your team with a successful startup.
Thanks to high-profile cases like Stuxnet, Flame, Disttrack (aka Shamoon), and Batchwiper, security has become an increasingly hot topic across all industries. In addition to direct system attacks, adversaries are attempting intrusions via infected memory sticks and mounting massive DDoS attacks that shut down communications.
While a completely secure system is theoretically possible if you lock it down entirely, you would sacrifice the system's usefulness and usability in the pursuit. A completely isolated system also goes without updates, leaving it vulnerable to new and improved viruses or hacks that it may not even recognize as running processes. Outlined below are some critical steps toward a more secure environment.
Segregate Your Networks
Your control network, your business network, and any outside-the-facility connection should never share the same network. The obvious risk is that one malicious email attachment could bring down both networks at once, and various hacks against switches and routers may leave the control system open to the underworld of computing.
Historically, plants have relied on DMZs (demilitarized zones) to isolate the control system from outside business networks. In light of recent attacks, this is becoming less and less trusted: a network segment reachable from both the control and business networks has proven to be a weakness in several of the attacks listed above. Allowing the control network to store data on a system that can burn it to DVD or CD or copy it to a memory stick provides a path for information to flow between the control and business networks without the two ever speaking directly.
Deny Access by Default
Configuring firewalls between networks is something many companies fail to do adequately; configurations are often rushed and left incomplete. The best policy is to deny all traffic by default and allow connections only on an exception basis, a concept called "whitelisting."
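Conceptually, whitelisting inverts the usual posture: nothing passes unless a rule explicitly permits it. The sketch below illustrates that decision logic with invented addresses and ports; in practice the enforcement lives in the firewall itself, not in application code.

```python
# Default-deny ("whitelist") filtering between business and control networks.
ALLOWED = {
    # (source prefix, destination host, destination port)
    ("10.10.1.", "10.20.1.5", 1433),  # historian replication only
    ("10.10.1.", "10.20.1.6", 443),   # reporting web server only
}

def permit(src_ip, dst_ip, dst_port):
    """Deny by default; permit only explicitly whitelisted connections.
    The prefix match is a crude stand-in for real subnet rules."""
    return any(src_ip.startswith(prefix) and dst_ip == host and dst_port == port
               for prefix, host, port in ALLOWED)

print(permit("10.10.1.42", "10.20.1.5", 1433))  # True  - on the whitelist
print(permit("10.10.1.42", "10.20.1.5", 3389))  # False - denied by default
```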
Deny Execution by Default
In addition to denying access, newer software monitoring systems, such as Bit9, whitelist all executable files on the system at the time of the original install. These systems can then be configured to block anything not on the whitelist from running, or to raise an alarm when something outside the whitelist executes.
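To illustrate the underlying idea only: commercial tools like Bit9 hook program execution itself, while this sketch merely shows the baseline comparison. The install path is invented, and hashing every executable at qualification time is the assumption being demonstrated.

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline(root):
    """Whitelist every executable present at install/qualification time."""
    return {sha256_of(p) for p in Path(root).rglob("*.exe")}

def audit(root, baseline):
    """Report any executable whose hash is not in the baseline."""
    for p in Path(root).rglob("*.exe"):
        if sha256_of(p) not in baseline:
            print("ALARM: non-whitelisted executable:", p)

baseline = build_baseline(r"C:\ControlSystem")  # captured with original install
audit(r"C:\ControlSystem", baseline)            # run periodically thereafter
```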
Restrict Physical Access
Simple measures, such as locking control panels, adding access alarms, and allowing only the DCS or PLC engineering nodes to program on the control network, can increase security and stop the spread of harmful worms and viruses.
Securing control and business network systems should be every organization's top priority. The Avid team can help you maximize security for your system. We welcome an opportunity to meet with you and your team to discuss your system security needs.