Creative cybersecurity strategies for resource-constrained institutions

In this Help Net Security interview, Dennis Pickett, CISO at RTI International, talks about how research institutions can approach cybersecurity with limited resources and still build resilience. He discusses the tension between open research and the need to protect sensitive information, noting that workable solutions come from understanding how people get their jobs done.

Pickett explains how security teams can partner with researchers to set guardrails that support innovation rather than slow it. He also shares observations on emerging risks, state interest in advanced technologies, and the challenge of managing data across diverse disciplines.

Research institutions often lack the budget or staff of large enterprises. What creative strategies have you seen to achieve security resilience under those constraints?

There’s a well-worn phrase that gets repeated whenever budgets are tight: “We have to do more with less.” I’ve never liked it because it suggests the team wasn’t already giving maximum effort. Instead, the goal should be to “use existing resources more effectively.” No organization has unlimited time or money. Making the most of what you have requires understanding your risks and focusing your resources where they will have the greatest impact. That’s where creative strategies can help.

For example, if a security team doesn’t have enough staff to monitor every system and review every activity log, it can distribute the workload to other groups with a stake in the outcome. The security team can develop templates, define processes, and provide guidance, while individual project teams handle their own log reviews and security documentation. By having a central team set direction and standards, and provide oversight, the work remains consistent and high quality without relying on a single, resource-limited group to do everything itself.
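As a minimal sketch of what such a shared template might look like, here is a hypothetical Python structure a central security team could hand to project teams for recording their own log reviews. The field names, severity levels, and escalation rule are illustrative assumptions, not taken from any specific framework:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical severity scale a central security team might standardize on.
SEVERITIES = ("informational", "low", "medium", "high", "critical")

@dataclass
class LogReviewRecord:
    """One project team's periodic log review, filed against a shared template."""
    project: str
    reviewer: str
    review_date: date
    systems_covered: list[str]
    findings: list[dict] = field(default_factory=list)  # each: {"severity": ..., "summary": ...}

    def add_finding(self, severity: str, summary: str) -> None:
        if severity not in SEVERITIES:
            raise ValueError(f"severity must be one of {SEVERITIES}")
        self.findings.append({"severity": severity, "summary": summary})

    def needs_escalation(self) -> bool:
        # The central team defines the escalation rule once; every project inherits it.
        return any(f["severity"] in ("high", "critical") for f in self.findings)

# Example: a project team files its weekly review.
review = LogReviewRecord("Study-42", "j.doe", date.today(), ["auth-server", "file-share"])
review.add_finding("high", "Repeated failed logins from a single external IP")
print(review.needs_escalation())  # True -> route to the central security team
```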

Research thrives on openness and collaboration, while cybersecurity demands control and restriction. How can institutions realistically balance those opposing needs?

I prefer to talk about cybersecurity in terms of what is possible, rather than leading with what security says people can’t do. The goal of security isn’t to restrict collaboration; it’s to enable it in a way that protects the organization, its staff, and, most importantly, the individuals who participate in our studies. They trust us with their information, and maintaining that trust is essential to our mission of improving the human condition.

The balance between openness and security is often achieved through what I call “guardrails.” By creating secure environments where researchers can collaborate freely within defined boundaries, we allow openness to thrive without sacrificing protection. When working with a research team, it’s important to understand their business needs and to then provide secure options that meet those needs within an appropriate tool or workspace.

For example, a team may need to collaborate in real time with multiple organizations on files containing sensitive data. Their first instinct may be to use a familiar free cloud storage platform, even though those services rarely meet security requirements for sensitive information. Instead, we can guide them to secure platforms that offer the same functionality and work with them to acquire and configure the right capabilities.

When you understand the users’ needs and learn how they want to work, you can recommend solutions that are both secure and practical. You don’t need to be an expert in every research technology. Start by paying attention to the services offered by cloud providers and vendors. They constantly study user pain points and design tools to address them. If you see a cloud service that makes it easier to collect, store, or share scientific data, investigate what makes it attractive. Even if that particular tool isn’t appropriate, you may be able to incorporate its useful features or workflows into the secure platforms you already provide.

How do you address the perception among researchers that security policies slow down innovation?

There has long been a perception that security slows down operations. It isn’t unique to research, but it certainly shows up there. And to be fair, this perception didn’t arise without reason. Adding security controls, or layers of controls, can slow processes, delay development, and extend timelines from concept to launch.

Fortunately, there are ways to change both the reality and the perception of security as a roadblock. Early in my career, a wonderful mentor (shout out to Dave Songo, former CIO for NIH’s NICHD) taught me that IT should always be approached with a customer service mindset. Two key principles from that guidance continue to shape how my team works and how we avoid being seen as hindering innovation.

First, understand how your policies and controls affect the work. Security shouldn’t be developed in a vacuum. If you don’t understand the impact on researchers, developers, or operational teams, your controls may not be designed and implemented in a way that enables the business. Second, provide solutions, don’t just say no. A security team that only rejects ideas will be seen as a roadblock, and users will do their best to avoid engagement. A security team that helps people achieve their goals securely becomes one that is sought out, and one that ultimately makes the business more secure.

Another concept that helps is something I call “appropriate security.” Not every project needs to meet the highest security bar. Depending on your business, it may be perfectly acceptable, sometimes even necessary, to create isolated environments with fewer restrictions. For example, you might set up sandbox spaces in the cloud where researchers can safely experiment with new tools or workflows. You might build a development environment where developers have enhanced administrative rights while they test and create new capabilities.

As long as these lower-security environments are properly isolated from systems that require stronger protections, you can collaborate with users to create secure, purpose-built spaces that encourage experimentation and innovation.
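To make the isolation requirement concrete, here is a rough sketch of the idea in Python. The environment tiers and policy table are assumptions for illustration, not a description of any real deployment; in practice this logic would live in network policy or cloud account boundaries rather than application code:

```python
# Hypothetical environment tiers and the traffic flows allowed between them.
# Lower-security spaces may talk among themselves, but never reach production.
ALLOWED_FLOWS = {
    "sandbox": {"sandbox"},
    "development": {"development", "sandbox"},
    "production": {"production"},
}

def flow_permitted(source_env: str, dest_env: str) -> bool:
    """Return True if traffic from source_env to dest_env is allowed by policy."""
    return dest_env in ALLOWED_FLOWS.get(source_env, set())

# A sandbox experiment cannot touch systems that require stronger protections...
assert not flow_permitted("sandbox", "production")
# ...but researchers can experiment freely within the sandbox itself.
assert flow_permitted("sandbox", "sandbox")
```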

When you take this kind of collaborative, solutions-focused approach, the negative perception of security will fade. Before long, you may even find yourself in a “be careful what you wish for” situation where more and more users actively seek out the security team because they see you as essential partners who help them innovate.

Are we seeing more state-sponsored interest in academic research, especially in areas like AI, biotech, or quantum?

We are definitely seeing an increase in state-funded research into emerging technologies like AI and quantum computing. Biotechnology has always been a focus of public investment, so I wouldn’t say interest there has grown recently; rather, it has remained a consistent state priority.

AI, on the other hand, has been discussed, developed, and used in various forms for many years. But the recent leaps in capability have driven interest to an entirely new level. I joked not long ago that it feels like one day I could barely get my home virtual assistant to understand that I simply wanted my morning commute time, and the next day I could ask, “What’s the song that goes da da da dum?” and it would correctly tell me I meant Beethoven’s Fifth.

When a technology shows this much promise and can improve so many aspects of life and work, everyone rushes to integrate it into tools, software, and everyday devices. Before long, we’ll be chatting with our refrigerators about recipe ideas based on what’s inside. States see these opportunities as well, not for songs or recipes, but for the jobs, economic growth, and improved public services these technologies can bring. One example is my home state of Maryland, which recently announced a planned $1 billion investment in quantum technologies over the next five years to drive future job growth. And Maryland isn’t alone; many states are making major investments to position themselves at the forefront of AI and quantum innovation.

Once the technology reaches a stage where it’s stable, scalable, and cost-effective, industry partners and government agencies will integrate it into tools, systems, and services. That’s the point at which it becomes something everyone can use, whether as a commercial product, a public-sector service improvement, or a capability that supports state and federal missions.

How can institutions implement data classification when research projects often span multiple disciplines and compliance regimes?

Data classification, often paired with Data Loss Prevention (DLP), comes with its own set of challenges, and those complexities grow when research projects span multiple disciplines with differing compliance requirements. In many ways, these challenges are addressed using the same tools and processes that support most classification efforts.

As security professionals, we’re responsible for ensuring information is received, stored, and accessed securely, and only by users or systems authorized to do so. That starts with validating identities, issuing credentials to the correct individuals, and ensuring that anyone attempting to access information is properly authenticated. After that, we must verify that the user is authorized to access the specific data in question.

However, none of those controls can function without first answering some fundamental questions. What data do we have? Where is it stored? How sensitive is it? These questions apply universally, regardless of discipline or compliance framework. If research involves data from numerous sources, each using different classification criteria or labeling processes, the complexity increases, making it harder to understand the data well enough to enforce proper access controls.
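As a minimal sketch of how those questions feed into access control, assuming a simple inventory structure with invented labels and roles (authentication is taken as already done):

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    """One inventory entry: what data we have, where it lives, how sensitive it is."""
    name: str             # what data do we have?
    location: str         # where is it stored?
    classification: str   # how sensitive is it? (labels are illustrative)
    authorized_roles: set[str]

def can_access(asset: DataAsset, user_roles: set[str]) -> bool:
    # Authorization check; the user is assumed to be authenticated already.
    return bool(asset.authorized_roles & user_roles)

inventory = [
    DataAsset("participant survey responses", "s3://study-42/raw", "Confidential",
              {"study-42-analyst"}),
    DataAsset("published aggregate tables", "s3://study-42/public", "Public",
              {"study-42-analyst", "public-reader"}),
]

print(can_access(inventory[0], {"study-42-analyst"}))  # True
print(can_access(inventory[0], {"public-reader"}))     # False
```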

There are several ways to approach this scenario. One approach, ideal only when work has not yet begun, is to standardize on a single classification framework and have all participating institutions adopt it before any data is collected or created. Even if organizations use different internal methods, shared data should follow a common standard to ensure effective collaboration.
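In practice, a shared standard often reduces to a translation table that each institution applies before data is exchanged. A minimal sketch, with invented institution names and labels:

```python
# Hypothetical mapping from each institution's internal labels to a shared standard.
LABEL_MAP = {
    "institution_a": {"open": "Public", "staff-only": "Internal", "sensitive": "Confidential"},
    "institution_b": {"green": "Public", "amber": "Restricted", "red": "Confidential"},
}

def to_shared_label(institution: str, local_label: str) -> str:
    """Translate a local classification into the agreed shared standard."""
    return LABEL_MAP[institution][local_label]

print(to_shared_label("institution_b", "amber"))  # Restricted
```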

Another approach is to start fresh by examining all existing data and assigning new classifications that reflect current needs. This may require a tool to scan and classify the data, but it provides a clean, consistent foundation moving forward.
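A toy version of such a scan might look like the following; the regex patterns and labels are illustrative only, and real DLP tools use far richer detection than this:

```python
import re

# Illustrative detection patterns, checked from most to least sensitive.
PATTERNS = {
    "Confidential": [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")],    # US SSN-like number
    "Internal":     [re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")],  # email address
}

def classify_text(text: str) -> str:
    """Assign the most sensitive label whose pattern matches, else Public."""
    for label in ("Confidential", "Internal"):
        if any(pattern.search(text) for pattern in PATTERNS[label]):
            return label
    return "Public"

print(classify_text("Participant SSN: 123-45-6789"))  # Confidential
print(classify_text("Contact: team@example.org"))     # Internal
print(classify_text("The weather was sunny."))        # Public
```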

A third option, suitable in some cases, is to classify an entire group or environment rather than individual data elements. For example, data from one discipline might be treated collectively as “Restricted” or “Internal,” while data from another might be considered “Confidential.” This can provide quick access control without reclassifying everything. However, this approach requires care because when labeling data in aggregate, the entire collection must inherit the most restrictive classification required by any single item within it.
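That inheritance rule is simple to state precisely: the aggregate label is the maximum over an ordered scale. A minimal sketch, with illustrative labels:

```python
# Classification levels ordered from least to most restrictive (labels are illustrative).
ORDER = {"Public": 0, "Internal": 1, "Restricted": 2, "Confidential": 3}

def aggregate_classification(item_labels: list[str]) -> str:
    """A collection labeled in aggregate inherits its most restrictive item's label."""
    return max(item_labels, key=ORDER.__getitem__)

# One confidential record makes the whole collection confidential.
print(aggregate_classification(["Internal", "Internal", "Confidential"]))  # Confidential
```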


