
Interview With Bill Reed – CEO at RemotelyMe by Shauli Zacks

Shauli Zacks

Published on: November 18, 2024
Content Editor

In a recent SafetyDetectives interview, Bill Reed, CEO of RemotelyMe, shared insights into how his company is transforming human risk management by integrating neuroscience and AI to foster high-trust, low-risk workplace cultures. Reed discussed the limitations of traditional security awareness training and the critical need to shift toward a security-aware culture, emphasizing the role of trust, workload balance, and stress management in reducing incidents. By leveraging predictive biomarkers and visual neuroscience assessments, RemotelyMe offers organizations an innovative, non-invasive approach to identifying and mitigating risks while improving employee engagement and retention.

Can you introduce yourself and talk about your role at RemotelyMe?

My name is Bill Reed, and I’m the CEO of RemotelyMe. We operate in the human risk management, security, and HR space.

Can you tell me a little about what the company does?

The term “human risk management” is not new—it’s been around for some time. Historically, it’s been applied in HR to reduce human risks related to safety incidents, disengagement, quiet quitting, productivity, and retention. More recently, the term has also been used in security contexts. Specifically, we focus on moving beyond traditional security awareness training to foster a security-aware culture. By leveraging behavioral science, we aim to reduce phishing incidents, password misuse, and other errors—often linked to disengagement—that can lead to security breaches.

How does RemotelyMe’s mission to foster high-trust, low-risk cultures differentiate it from traditional HRM and cybersecurity solutions?

Traditionally, cybersecurity has relied on what’s called security awareness training. These sessions are often conducted twice a year, requiring employees to spend 45 minutes to an hour going through training courses. However, recent reports from organizations like Microsoft, Gartner, Forrester, and the National Institutes of Health (NIH) all highlight that these efforts reduce phishing and other incidents by only about 3%. Clearly, this approach isn’t working.

We need to shift toward fostering a security awareness culture. That requires understanding the root causes of security incidents. Studies from the NIH have found that workload imbalances, stress, and low trust in the workplace are significant contributors to security issues. These same factors also lead to HR-related concerns, like retention problems and safety incidents.

At RemotelyMe, we address this by using predictive biomarkers to identify risks within just nine minutes. We uncover why these risks exist and, most importantly, how to reduce them through tailored training courses and coaching programs.

You integrate neuroscience and AI into your platform. How do these technologies contribute to better understanding and managing human risk?

This just in—people have brains.

If we agree that people are the most important resource an organization has, we need to understand what’s happening “between the ears.” That means understanding how people are wired, what they’re thinking, and what’s really going on. High workloads and stress are well-known issues, but we also need to examine trust factors.

Deloitte conducted an extensive study that found organizations with high trust levels experience 4x greater business performance, 2x higher productivity, and half the retention issues. Engagement also improves significantly, leading to fewer mistakes and fewer security incidents.

To address these challenges, we need to measure trust and risk factors and help organizations improve trust while reducing risks. Neuroscience plays a critical role in this process. It allows us to measure these factors accurately without invasive methods like blood tests or other physical samples.

At RemotelyMe, we’ve developed a way to do this in just nine minutes using what we call visual neuroscience assessments.

In a hybrid or remote work environment, what are the main security challenges that companies face, and how does RemotelyMe address them?

Companies are facing a surge in incidents. Many CISOs I’ve spoken with say that after the pandemic, they feel like they’re managing hundreds, if not thousands, of data centers due to the prevalence of remote work. One major avenue for ransomware attacks is brute force attacks using remote desktop protocols.

There’s also significant Wi-Fi mismanagement. For example, remote workers using open networks at places like Starbucks leave themselves exposed to cyber threats. These challenges highlight the need for a security-aware culture, where employees understand the risks and how to mitigate them effectively.

Studies from the NIH show that individuals in low-trust environments are more likely to fall for phishing attacks. In contrast, high-trust environments empower employees to take proactive measures and do what’s necessary to mitigate security risks.

Stress and heavy workloads also contribute to these problems. Measuring and addressing these factors requires neuroscience and behavioral science to accurately identify risks and provide solutions. At RemotelyMe, we integrate these disciplines to help organizations tackle these challenges head-on.

What feedback have you received from clients about the use of neuroscience-based tools in assessing trust and engagement?

Initially, there were some concerns: Would this feel invasive? Would employees be willing to participate? Would they feel forced?

But the reality is that most organizations already require employees to complete security awareness training as a condition of employment. This is no different: if you want to reduce security risks, it should be required for all employees, especially those in high-risk roles, those handling sensitive information, and those working in IT security.

The feedback has been overwhelmingly positive. We’ve achieved a 97% completion rate, compared to the 67% typically seen with other assessments, and a 92% approval rating. Employees—especially younger, video-oriented generations—appreciate the format, and we’ve had virtually no pushback.

More importantly, our tools are highly reliable, with a 93% accuracy rate compared to the 66% seen with traditional text-based tests. They only take nine minutes to complete, whereas other assessments take three times as long. Clients have been amazed at the depth of insights we provide, particularly in uncovering what’s really happening within their organizations.

Many are surprised by the disconnect between their annual engagement surveys and the actual state of their workforce. For example, Gallup reports that about 80% of employees are disengaged, with 20% so disengaged that they could pose insider threats. Our tools help identify these individuals—not to fire them, but to provide the support needed to re-engage them.

How do you envision the role of AI evolving in HR and cybersecurity, particularly in terms of risk management?

There’s been a lot of concern about AI introducing bias, but the truth is that it often injects less bias than humans. For example, many CHROs we speak with note that hiring managers, despite their best intentions, tend to hire people who are just like them. This can result in a lack of diversity of thought, which is the most valuable form of diversity for organizations.

At RemotelyMe, we don’t use AI within our neuroscience or behavioral science assessments. However, we do use AI responsibly for analysis and to extract insights from the data we gather. This approach aligns with regulations like New York’s Local Law 144 and similar AI-related legal frameworks, ensuring our methods remain compliant and balanced.

By using AI in the right way and for the right purposes, we mitigate legal risks and maintain ethical practices. We also enhance our AI analysis by integrating data from sources like O*NET, SHRM, LinkedIn, and Harvard. Regulations specify that as long as AI is not the sole decision-making factor—for tasks like recruitment or employee management—and is balanced with other data inputs, it’s both legal and effective. This balanced approach allows AI to be a powerful tool for improving HR and cybersecurity risk management without introducing undue bias.
