Worried about IT security?
You should be.
We live in a scary world, and the reality of it may be more discouraging than you know.
Around the world, access to our personal information – and our very identities – is under attack. Viruses, spyware, malware, hackers, disgruntled staff, aggressive competitors… we are surrounded by electronic threats.
And when data breaches occur, they can have far-reaching financial and reputational consequences – up to and including the loss of customers and revenue.
These issues are compounded in the life science industry, where careless management of information can have regulatory compliance implications beyond privacy regulations such as GDPR or HIPAA.
In the face of such threats, it’s common for pharmaceutical or medical device IT managers to worry about the role custom or bespoke software plays in security risk management, and whether it raises a company’s risk profile.
Yes – of course it plays a role. But custom software doesn’t (or shouldn’t) live in a vacuum. It’s simply another element in your security planning puzzle.
Security must be both proactive and reactive.
Software developers should adopt a two-prong approach:
- Preventative measures: Take all reasonable measures to prevent hacking.
- Worst-case planning: Assume you will be hacked eventually. Use encryption and other protection methods to make sure personal or clinical data is unusable if removed from your system.
The Security Three-Legged Stool
Best practices for IT security require analysis of three key risk factors (or stool legs, for analogy-lovers): software, hardware, and policies & processes.
Software

It should go without saying: good software security practices can't be an afterthought. Developers must bake them into custom software from the start of the design phase.
What constitutes a strong security practice for custom software? To start, every connection between the software and any data source should verify the correct credentials. Another key element is information security: data should be identifiable only to parties with an established need to know.
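As a minimal sketch of the first point – every data connection re-verifying credentials before returning anything – the class and names below are hypothetical, using only the Python standard library (PBKDF2 for credential hashing, constant-time comparison to resist timing attacks):

```python
import hashlib
import hmac
import os

def hash_credential(secret: str, salt: bytes) -> bytes:
    # Derive a comparison key from the secret; PBKDF2 is in the stdlib.
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)

class DataConnection:
    """Hypothetical gateway: every data request re-checks credentials."""

    def __init__(self, stored_salt: bytes, stored_hash: bytes):
        self._salt = stored_salt
        self._hash = stored_hash

    def fetch(self, user: str, secret: str, query: str) -> str:
        candidate = hash_credential(secret, self._salt)
        # Constant-time comparison avoids leaking match length via timing.
        if not hmac.compare_digest(candidate, self._hash):
            raise PermissionError(f"invalid credentials for {user}")
        return f"results for: {query}"  # placeholder for a real data call

salt = os.urandom(16)
conn = DataConnection(salt, hash_credential("s3cret-token", salt))
print(conn.fetch("alice", "s3cret-token", "SELECT 1"))
```

In a real system the salt and hash would come from an identity store rather than being built inline; the point is that no query path bypasses the check.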
In some industries (pharma, healthcare and finance, for example), the implications of lax information security can be severe – and regulatory agencies are watching.
For internal-use software, tying into existing enterprise identity solutions (Active Directory, etc.) makes custom solutions more secure and more manageable.
Custom software developers have often deferred security concerns until late in the development lifecycle. While that approach can streamline early development, it also lets vulnerabilities accumulate – making them more difficult to identify and close later in the development timeline.
Custom software builds demand security-aware coding practices across the entire coding timeline. This includes software vulnerability tests and periodic code security reviews.
Software deployment is a task in which potential security no-no's sometimes happen – usually as a result of developers taking shortcuts. Developers should never test against production data – even a copy – unless it has been fully anonymized. This is a significant issue when clinical and research data are present.
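One hedged sketch of preparing a production extract for test use: replace PII fields (the field names here are hypothetical) with salted, truncated hashes so records keep their shape for testing but no longer identify anyone. Strictly speaking this is pseudonymization; true anonymization may also require removing quasi-identifiers.

```python
import hashlib

PII_FIELDS = {"patient_name", "email"}  # hypothetical PII field names

def anonymize_record(record: dict, salt: str) -> dict:
    """Replace PII values with salted, truncated hashes; keep other
    fields intact so test data stays realistic."""
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[field] = digest[:12]
        else:
            out[field] = value
    return out

row = {"patient_name": "Jane Doe", "email": "jane@example.com", "dose_mg": 50}
safe = anonymize_record(row, salt="per-extract-secret")
print(safe)
```

Using a fresh salt per extract prevents hashes from being correlated across test datasets.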
The code should also never be released directly into production – a good DevOps (or Continuous Integration / Continuous Deployment) system should be used to enforce repeatable practices.
Data integrity in pharmaceutical and biopharmaceutical research & manufacturing is a growing concern, and has led to increased scrutiny from regulators. Here are some basics of maintaining data integrity:
- Thoroughly document which fields represent personally identifiable information (PII) and which do not. This is especially important in regulated industries, such as healthcare, pharma and finance – but applies to a wide range of companies impacted by regulations such as GDPR.
- Perform data validation at both the UI and server layers.
- Allow selection from sets of acceptable values, rather than free text entry.
- Only pass data from system to system over a secure channel.
- Use strong encryption instead of encoding and obfuscation.
Hardware plays a critical role in IT security – from the firewalls used to protect systems to myriad backup activities designed to guarantee data protection during unforeseen events.
From a hardware perspective, one fundamental characteristic to system security is to restrict physical access to hardware. Only properly authorized people should be able to access or modify the infrastructure.
Hardware and all platform software must be patched regularly to defend against the latest vulnerabilities. Plans should be put in place to manage the patching process in order to maximize confidence in security.
Policies & Processes
The third leg of our security stool is policies and processes. The strongest software in the world can be easily rendered vulnerable by weak or non-existent policies. (How can you tell if your policies aren’t up to snuff? The first rule of thumb: ‘password’ is not an acceptable password.)
“The Call is Coming From Inside the House”
Be defensive of user actions – don’t assume security violations only come from outside your organization. Minimizing the data exposure of each user to only the data they require can help lessen internal security weaknesses. This can also be required under certain regulatory statutes governing the protection of data records, such as FDA’s 21 CFR 11.
In a 2018 article at CSO, Webroot Chief Information Security Officer (CISO) Gary Hayslip discussed the infosec policies, documents and procedures needed for a comprehensive security program. These include:
- Acceptable Use policies
- Access Control policies
- Remote Access policies
- Business Continuity plans
We need to remember: people are the weakest link in IT security. With nearly all breaches attributable to human error, the policies and processes governing cyber security are of foremost importance. But remember, IT policies in your organization are only as valuable as their enforcement.