Salesforce Security Blind Spots: Proactive Protection Against Common Threats

Distinguished Security Technical Architect Rachel Beard joins Waqas Nazir, Founder and CEO of DigitSec, on August 31st, 2022.

The previous Salesforce Security Blind Spots session, featuring Andy Ognenoff, Managing Director and Global Salesforce Security Lead at Accenture, in conversation with Waqas, can be viewed here.

In this second session of our Security Blind Spots series, special guest Rachel Beard, Distinguished Security Technical Architect at Salesforce, joined Waqas to talk about more security blind spots and how to proactively protect against the common threats they can create.

The first topic of discussion is a security blind spot that accounts for 85-90% of attacks: the human element. This could be someone internally wanting to do something malicious, but many times it's related to accidental misuse of data or accidental data leakage. People sometimes take shortcuts with security without necessarily meaning to do the wrong thing, but end up creating exploitable risks. One way to help prevent this is for Salesforce users to institute the principle of "least privilege" so only the right people have the right access to certain data.

The rest of the conversation had Rachel and Waqas walk through other blind spots related to:

  • The complexity of data security
  • Understanding the customizations being built on top of Salesforce
  • Access controls, roles, and permissions
  • The importance of identifying and classifying data
  • Why there's no one-size-fits-all and no "done" with security
  • The foundational work that needs to be completed first
 
Please watch the full session above to get insight into each one of these points, including specific examples and what you can do to mitigate the risks they create. 
 

Full transcript is below.

Security is an evolving target

Waqas (DigitSec):

That's wonderful. Thank you for being with us today, Rachel. I think your experience will be very helpful in navigating the security realm within Salesforce. It's important to understand that security is always an evolving target. It's a moving target, and new threats are being identified pretty much daily. There are new attack vectors surfacing pretty regularly. So with that in mind, perhaps that's a good point to start. What are some of the things that you are seeing, Rachel, as far as the evolving threat landscape when it comes to Salesforce, and what are some of the challenges that you see Salesforce customers addressing in the ecosystem?

Rachel Beard (Salesforce):

Yeah, I'm glad to expand on that a little bit. In general, I see a lot of questions coming up about the volume of attacks and the direction of the different threats that are approaching customers. A lot of times, they're very happy to let Salesforce manage this, but they want some documentation or understanding of how we prevent attacks like ransomware, account credential compromise, credential stuffing, and things like that.

Of course we have our threat intelligence team working around the clock, our CSIRT teams, and more to keep the data safe from outside attacks. For Salesforce customers, what tends to keep our security folks up at night is the risk of an insider threat. Somewhere between 85 and 90% of attacks come from the human element.

So it's somebody who is looking to do something malicious: a bad actor, someone who's maybe gonna be leaving the company and wants to do some damage before they leave, or perhaps take a list of customers and deals with them before they go to a competitor.

But more and more, I'm seeing risks from insider threats related to accidental misuse of data or accidental data leakage. This is happening in the wake of all the chaos and disruption that we've seen in the last couple of years: disruption that has led to the Great Resignation, to people moving geographically and changing jobs, and to people maybe changing careers entirely.

What's happening is this effect of new folks joining: at my customers, maybe they've got some new hires that don't quite understand the security policies or data classification strategies and aren't handling sensitive data with care; or oftentimes you have the remaining employees taking on the burden of open head count.

They're taking shortcuts. They're not necessarily meaning to do the wrong thing, but maybe they download data that shouldn't be downloaded; maybe they're working from home on a personal machine and that data gets synced elsewhere and proliferates. So I see a lot of questions of that nature. And on top of all of that, I'm getting an ongoing stream of questions about data privacy, data residency, and compliance with all sorts of global policies.

So there’s a lot for our customers to manage. Is that what you’re seeing with your customers Waqas?

Salesforce is a complex environment

Waqas (DigitSec):

You touched on a very important element of data security, which is the complexity. Many times people don't understand the levels of complexity that exist within a Salesforce environment. People think that it's just, you know, CRM data, just contact information for customers. But over the years a lot of sensitive information has come to reside within Salesforce. We see with our customers that they're getting more interested in addressing the core security of the platform. Before, like you said, they were happy just relying on Salesforce for everything, but now they've come to the realization that they have to play their part in addressing the security challenges. We've seen within the industry that 60% of data exfiltration attacks are preventable. They're basically caused by, like you said, a human element or an error within the overall program that a customer has.

What we recommend as a best-practice approach for addressing data security specifically (privacy we can touch upon a bit later) is: how can you create a program where you are addressing all your needs from a security perspective? The number one goal of, I think, any enterprise customer today is to avoid data leakage or a data breach. While it's one statement, a lot of things go into actually addressing it in a systematic way.

We've seen in many instances that having a haphazard way of doing security actually undermines your goal of preventing data leakage. So what we recommend to our customers is a systematic approach to addressing your security needs in Salesforce. The first thing that we always recommend is just knowing your attack surface.

People think Salesforce is just one SaaS platform that your sales reps or your users log into. They interact with the data within the platform, and that's the extent of it. But over the years, Salesforce has enabled customers to do a lot more than just manage some data. Salesforce today allows you to open up your platform even to anonymous internet users, using Experience Cloud.

Similarly, it allows you to expose your platform as a programmatic interface through the use of Apex web services and things of that nature. So the first thing we really want our customers to understand is their attack surface. Given a Salesforce environment, how have you customized it? What are you exposing? And what type of data are those exposed endpoints interacting with?

For example, if you have an external interface to a critical piece of data, such as your IP or sensitive PII, what controls do you have in place? Is it even a good idea to interact with that piece of data from the outside?

Things like that we really want our customers to take to heart: doing this exercise of understanding the customizations that are built on top of Salesforce, understanding the data that is being manipulated, and then actually taking the steps to identify the threats against those and protect them. It's a multi-tiered approach that we recommend, Rachel, because one size doesn't fit all. We can give them the best practices, like "don't grant this permission," "don't do X, Y, Z." But at the end of the day, they'll have to do this exercise of understanding the threats that they have and then what controls they have to mitigate those threats.

But one thing that we always want our customers to take to heart is this principle of "least privilege" within Salesforce. Because that, I think, is something that's often overlooked, because they see it as an additional step to customize things. What are some of the things that you see, Rachel, when you try to ingrain that in a customer? What is the principle of least privilege, and how does it fit into this big picture of making sure your data is secure?

Make sure you have a good strategy around data classification

Rachel Beard (Salesforce):

"Least privilege" is probably one of the phrases that I say most often during the course of my day. I work with all different customers from different industries, of different sizes, at different levels of maturity in how they run their governance and security operations. I think that several years ago there was a big trend toward transparency.

The thinking was: if you did good hiring, let's have a lot of data accessible within the organization. And that led to an expanded surface of risk, because the more data that's available for any given user to view, the more they could potentially download, either on purpose or by accident; the more an attacker could see when impersonating that user in an incident of account credential compromise; and the more risk in general of exposure of PII or sensitive, competitive, or confidential information.

So I spend a lot of time coaching Salesforce customers on their security strategy, and access controls are ALWAYS where I recommend that they start. That's super important. Number one, make sure that you have a good strategy around data classification. At a lot of companies, especially because of turnover, you could have an admin walking in owning a Salesforce org that they did not design or implement, and they may not even be aware of all the different types of data that are stored in Salesforce and what the sensitivity level is.

So I HIGHLY recommend that you spend some time identifying what sensitive information is being stored and using the out-of-the-box data classification feature to tag that data with its corresponding sensitivity level and any compliance categorizations that go along with it. Once you know where you have your most sensitive data elements (and again, not just regulated or industry-protected data, but confidential, restricted internal information as well),

you're going to want to add on some additional security controls. If there are certain objects that you know are gonna contain sensitive data, we wanna pay special attention to making sure that the org-wide defaults are set to "private" on those objects and that you don't have too many sharing rules that could really open up the sharing. Make sure that records are just shared with the folks who need to see them.

If you have good profile administration, you can make sure that you turn off access to certain objects completely for some types of users. For example, if you're not a support agent, you may not need access to cases. If there's gonna be sensitive information in the cases, let's make sure that that's not available to everybody. And then, getting really granular, we can think about field-level security: taking that data classification and saying, these are our restricted fields, and we're gonna make sure that this data is not visible to all users, just to the users who absolutely need to see that data to do their jobs.

You need to be very thoughtful about this process and ruthlessly eliminate access to anybody who doesn’t explicitly need sensitive PII data. That goes for your sandboxes as well as for your production environments. I do want to make sure everyone understands that data classification has been available for a couple of years. 

It's on all of your fields, your standard fields and your custom fields. So you can go to any field that you have right now, today, click edit, and update those properties. They're also available through the metadata API. So it's really important to get a handle on that.
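As a concrete illustration of that point, here is a minimal Apex sketch that reads those same classification properties back via SOQL on the standard FieldDefinition entity. Contact is just an example object, and the check for unclassified fields is one possible audit, not the only one:

```apex
// A minimal sketch: list the data classification tags on Contact fields
// and flag any field that has not been classified yet.
List<FieldDefinition> fields = [
    SELECT QualifiedApiName, SecurityClassification, ComplianceGroup
    FROM FieldDefinition
    WHERE EntityDefinition.QualifiedApiName = 'Contact'
];
for (FieldDefinition fd : fields) {
    if (fd.SecurityClassification == null) {
        System.debug('Unclassified field: ' + fd.QualifiedApiName);
    }
}
```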

Waqas also said something really interesting earlier about the anonymous types of community users. This has been a bit of a challenge for some companies that I work with as well. So it's important to not just make sure that you have strong permissions and role delineations internally; if you have ANY public sharing, make sure that you're doing a really tight job of controlling and constraining access to who needs it within the communities. Then on top of that, there's a whole bunch of profile and permission set functions that I like folks to keep track of, and that you really may need to keep track of for compliance purposes, for example, SOX [Sarbanes-Oxley] compliance. Make sure that folks aren't over-permissioned or getting new permissions like "view all data" and "modify all data." Make sure that you do have a couple of admins with that break-glass functionality to bypass single sign-on, so that they could get in if you had an SSO outage. But when are they actually using that? Are they abusing that permission? And are they even using that in sandboxes and so forth?

Make sure that if users don't need to export data, they just don't have export access on their profile or permission sets. If you are doing things like encrypting data, limit who has access to managing encryption keys. So there are a lot of different permissions; exactly what Waqas said, there's a ton of complexity. It's really important that you have a good, solid, well-documented strategy around this, and that you continue to revisit it as your needs change and as the compliance and security landscape continues to change.
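One way to keep an eye on those permissions is to query who actually holds them. A minimal sketch, assuming the two permissions you care about are "Export Reports" and "Modify All Data"; the Permissions* names are the standard boolean fields on PermissionSet, and profiles show up here too because each profile is backed by a permission set:

```apex
// A minimal sketch: find every active user granted "Export Reports" or
// "Modify All Data" through any profile or permission set.
List<PermissionSetAssignment> risky = [
    SELECT Assignee.Username, PermissionSet.Label
    FROM PermissionSetAssignment
    WHERE Assignee.IsActive = true
      AND (PermissionSet.PermissionsExportReport = true
           OR PermissionSet.PermissionsModifyAllData = true)
];
for (PermissionSetAssignment psa : risky) {
    System.debug(psa.Assignee.Username + ' via ' + psa.PermissionSet.Label);
}
```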

So Waqas, again, you've brought up that complexity. I'm clearly seeing a lot of complexity out here with my customer base as well. Do you have any strategies that you find especially successful?

The whole system can fall like dominoes

Waqas (DigitSec):

I think the thing that you mentioned around data classification is a "must have." What Salesforce has been doing, which is actually a very good step, builds on the exercise of classifying data. For example, let's say you mark something as PII.

Recently, I think with this new release, Salesforce is creating some custom rules around, "Hey, if something's already been marked as PII or sensitive, then perhaps if an admin goes and creates a sharing rule which opens up that object, that event should be blocked." So there are controls being put in place to help avoid mistakes. But they'll only be effective if you've actually gone in, done your work, and classified your data, because then you can turn on these flags, which give you added protection.

One thing that you want to implement with any security strategy is a layered defense. If your only layer of defense is that a certain permission set is never granted, what happens when somebody does grant that permission set?

Your whole system falls like dominoes. What you want is, basically, a layered approach to addressing security. Because there is complexity, and if the only approach you take is a standard one, like "Hey, we'll just lock everything down through our profiles," that's going to be limiting and that's going to be a single point of failure. So what we really recommend is to understand what value you get out of a control, look at how effective it is, and recognize that that one control is not going to protect you against everything.

For example, we deal with a lot of customization that is done on top of the platform: folks writing custom apps and custom solutions in Apex and Lightning Web Components, and creating custom flows and things of that nature. While some of these would be protected by a strong principle of least privilege, not everything will be, right?

For example, Apex, by design, runs with system permissions. So it's the job of the developers to ensure that they're actually enforcing the roles and permissions within their code: making sure that the code is only accessible to certain profiles, and that the code actually validates whether this user should be able to manipulate this piece of data or not. So again, the challenge with Salesforce really becomes understanding the environment that you're in and doing a really good job of understanding your data and classifying it really well.
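To make that Apex point concrete, here is a minimal sketch of the enforcement pattern described above; Loan__c and SSN__c are hypothetical names used only for illustration:

```apex
// A minimal sketch: Apex runs in system mode by default, so the code itself
// must opt in to the running user's permissions.
public with sharing class LoanService {      // "with sharing" respects record access
    public static List<Loan__c> getMyLoans() {
        // WITH SECURITY_ENFORCED throws a QueryException if the user lacks
        // object- or field-level access, instead of leaking data in system mode.
        return [SELECT Name, SSN__c FROM Loan__c WITH SECURITY_ENFORCED];
    }
}
```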

Then you can create these layers of controls for, let's say, standard access, customizations, integrations with third parties, and connected apps. There are many elements here: whenever you open your platform up, you should know the security implication of doing so. So one of the things that we recommend to a lot of our customers as they're developing is to always have a bullet point of, "What is the major threat against this feature, and how are we mitigating it?"

Getting them to start thinking about creating some basic threat models around the customization that they're doing, based on their data type, is an exercise that we always recommend our customers do. It helps them understand their environment better and also put clear controls in place that are more of a layered approach than just a one-size-fits-all kind of implementation. So going off of what you mentioned, it sort of is a must-have, right?

Would you say it's accurate that "not having data classification is perhaps a very risky way of operating in the Salesforce world"? Would that be a fair statement, Rachel?

Organizations need to map out their strategy

Rachel Beard (Salesforce):

Oh, yes. I agree. I'll give some folks a pass on not having it implemented in their Salesforce org IF they have actual documentation of what data is being collected and why. But it's astounding how many organizations I speak with that don't really have that mapped out. Therefore, when they have a compliance request or any kind of security requirement, they have to go figure out where each type of data is stored, and that is gonna slow everything down and make it easier to miss something in the process. So it's much more risky.

Learn if you’re doing the right thing by measuring

Waqas (DigitSec):

So I think it is a risky environment to operate in. With security, you want to have a real approach to addressing it. If somebody says something is 100% secure, that statement is very hard to accept for anybody in the security industry. But if they say, "We have a program for addressing security systematically," that's credible. That is what some companies do well versus those who don't approach security the right way.

One of the ways you can see whether you're doing things right is to measure. Once you have, let's say, the data classification, and you have a good idea of your access control, your visibility, and your sharing model, how do you know it's working? Have you seen some good examples of how people do that well, and what are some of the things they should be measuring to see if their program is effective?

Customization can introduce vulnerabilities

Rachel Beard (Salesforce):

Just like you mentioned, there's no one-size-fits-all approach, and there's certainly no "done." We're never "done" implementing security. It needs to be continuously monitored and adjusted as conditions change and as new vulnerabilities and threats emerge. So for Salesforce customers, what I really like to see is that they have a strong understanding of secure coding principles, because that's one of the bigger threat vectors: everything that you've stored with us is secure,

but when you start customizing on top of that, you could introduce some vulnerabilities. So make sure that you're using tools for code scanning and things like that, so that certain vulnerabilities are not introduced. On the other side of that is what your users are doing: how do they handle that sensitive data? Again, data classification, field-level security, and all that stuff is foundational.

When is it possible that sensitive data is being handled in bulk? That's a lot more risky than someone looking at a single record; it's someone looking at a whole list of sensitive information that they could be exporting, or even screenshotting or printing, et cetera. That could be an API action; it could be a list view; it could be reporting.

It's important to get a handle on that. What I like to recommend is our event monitoring, which is part of Salesforce Shield, because it gives you not just visibility into bulk access to data in real time, but can get very granular, down to page views. That's really important if you have any kind of competitive or sensitive information, or anything that needs an extra layer of protection because of trust with your stakeholders. And I've got some really interesting examples, because I work with all kinds of customers.

Of course you can imagine the sensitive data that our financial services or healthcare and life sciences customers manage. But I also get to work with some really cool media companies, political organizations, et cetera. They need to make sure that somebody's not just searching in Salesforce for a person of interest (a celebrity, a political figure, et cetera) to find an address, a phone number, or their assistant's contact information.

So it's really important that once you've set up those access controls to minimize who can see what, you monitor to know, "Is it working?" and "Who actually views what kinds of information?" With event monitoring, you get searches as well as click-throughs; you have access to reporting information; files, whether it's sharing, preview, download, or API access; and admins logging in as somebody else as a way to see information or make changes under the radar.

There are over 70 different events, some of them in real time and some of them in daily and hourly blocks. On top of just getting the logs, you need to be able to visualize the logs so that you can understand when you have normal, baseline, expected behavior, and when something is happening that is outside the bounds of what you would expect.
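For orientation, here is a minimal Apex sketch of pulling one of those log types, the report-export event, from the standard EventLogFile object. In practice most teams forward these files to a SIEM rather than reading them in Apex:

```apex
// A minimal sketch: fetch yesterday's report-export logs from Event Monitoring.
List<EventLogFile> logs = [
    SELECT EventType, LogDate, LogFile
    FROM EventLogFile
    WHERE EventType = 'ReportExport' AND LogDate = YESTERDAY
];
for (EventLogFile log : logs) {
    // LogFile is a base64 blob holding a CSV of who exported what, and when.
    System.debug(log.LogFile.toString());
}
```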

For that visualization, we do include some analytics, but by and large, the customers I'm working with are taking those logs and sending them into a SIEM solution. That way they can aggregate the Salesforce user behaviors with all the other security behaviors and user activity that could spell out that kind of profile for an individual, and then be very proactive.

We do have threat detection now, which runs in the background and looks for signs at the individual level of things like credential stuffing and session hijacking, but also anomalous behaviors around report access and API access.

So it may be that you have a user who is interacting with different types of data than they normally interact with, maybe from an unusual location, maybe working at an unusual time of day relative to that individual user's baseline pattern. We can send you some anomaly events based off of that. And then finally, transaction security is extremely important for being able to alert in real time and/or block in real time some of those riskiest behavior patterns.

I'm typically thinking of things like somebody downloading a large number of records. Maybe it's okay that they're looking at the "all contacts" report, but they shouldn't be downloading all contacts or all opportunities. Or, going back to data classification: data classification works with transaction security now. So you can say, "I want to prevent a user from even building a report that has this information, because they shouldn't even be able to preview that data in bulk."
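Custom transaction security conditions are written as Apex classes implementing the TxnSecurity.EventCondition interface. A minimal sketch against the ReportEvent object follows; the 1,000-row threshold and the exact Operation value are illustrative assumptions, and the block-versus-alert action is chosen in the policy setup, not in code:

```apex
// A minimal sketch of a transaction security condition that fires when
// someone exports a large report. The policy's configured action (block,
// require MFA, or notify) applies whenever evaluate() returns true.
global class BulkExportCondition implements TxnSecurity.EventCondition {
    public boolean evaluate(SObject event) {
        ReportEvent re = (ReportEvent) event;
        return re.Operation == 'ReportExported'   // assumed operation value
            && re.RowsProcessed > 1000;           // illustrative threshold
    }
}
```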

That capability is really powerful: being able to block it in its tracks, get an alert, and then have all the logs so you can go back and identify that user. When I think about how all this goes together (because I realize there is so much complexity here), I really like to think about big-picture best practices around this end-to-end security monitoring. First, "How do we control authentication?"

"How do we make sure we're only gonna get the authorized users in there, and not unauthorized ones?" Next, I really wanna think about data classification. "What type of data is being collected?" "How sensitive is it?" "Who should have access to it?" That should help inform some of your roles, profiles, permission sets, sharing rules, field-level security... all of that. To make sure that your users just see what they need to see to get their jobs done, and no more.

Then I really like to think about the monitoring and making sure that we understand, "What is the baseline?" "What do we expect?" "What's a normal volume for reporting and page views and things like that?" After that I like to layer on some alerts, so I know I can investigate right away when something's happening, not months later, when we're losing customers to a competitor and we have to figure out why.

We want to know right away if someone's doing those downloads. And then the final step... well, not really final, but in this timeline... would be putting those preventative measures in place, once you have a good sense of what the baseline is and how far off the baseline something needs to be before you block. Above and beyond that, I have a ton more security best practices related to monitoring multiple orgs at once, managing sandbox security, and managing data privacy policies and data subject rights.

So this is an ongoing process, and my team of security architects helps customers with this kind of security strategy. That's something that we can potentially discuss at a future time. But if anything, I think Waqas and I have both drilled in data classification. If you looked at data classification in Salesforce at the start, it was really just an FYI documentation exercise. That seemed a little bit futile to most people; it didn't make a lot of sense to do it. But over time we expanded it, so it's reportable.

Now, if you have an internal or external audit and need to produce a list of data elements that are sensitive or related to GDPR or CCPA and so forth, you can extract that. And like I mentioned, it's now usable within other solutions like Event Monitoring and transaction security, and like Data Detect in our Privacy Center and Security Center; you'll see some alert icons when you're managing sensitive data in those policies. So I would expect this emphasis on data classification to only expand. It certainly won't go away as long as we have emerging compliance and privacy policies in place.

Foundational work needs to happen, right?

Waqas (DigitSec):

I think you are hitting the nail on the head: foundational work needs to happen. Unless you understand your data and classify it, security will be a whack-a-mole type of exercise. Knowing your data has become a critical fiber within the security program, and like I said, more controls are being built on top of that assumption. So if there's one key takeaway from this session, it's "know your data and classify your data."

Rachel Beard (Salesforce):

“Know your data.” Exactly! Waqas, do you have any insights around techniques that you’re seeing for security monitoring?

Data and how it’s manipulated in the platform

Waqas (DigitSec):

We're seeing, like you mentioned, a lot of emphasis on data and the way data is being manipulated within the platform, so assessments of how Salesforce is implemented focus on data. But we are also seeing a realization that Salesforce is not only accessed via the web interface; it is also accessed by third parties via APIs and connected applications. So we see interest in making sure that only authorized third parties have access to a Salesforce environment.

That means the APIs are being monitored closely, and API access is actually being disabled where it's not being used. That's an area where we see a lot of interest: while companies may be monitoring the users, they're also monitoring the machines.

And that's where we see a lot of work being done now. You may have your permissions and your profiles locked down, but then you have a connected app that has full access, and you give that app access to everybody in your user base. You've effectively created a way around your security model. So we work with customers on understanding third-party risk from that perspective as well, and that's an important element of security monitoring in an environment.
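A minimal Apex sketch of that kind of connected-app review, assuming the standard OauthToken object (which backs the Connected Apps OAuth Usage page in Setup) is what you want to inspect:

```apex
// A minimal sketch: list which connected apps hold OAuth tokens, for whom,
// and when each token was last used.
List<OauthToken> tokens = [
    SELECT AppName, User.Username, LastUsedDate, UseCount
    FROM OauthToken
    ORDER BY LastUsedDate DESC
];
for (OauthToken t : tokens) {
    System.debug(t.AppName + ' / ' + t.User.Username +
                 ' last used ' + t.LastUsedDate);
}
```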

So I know we're close to time. We can open it up for questions and answers if there are any. We're happy to address those.

Rachel Beard (Salesforce):

I’ve had such a great time chatting with you. I have not kept an eye on the chat or Q & A, so let’s pop that open and I’m sure Andy’s curating some good questions for us as well.

Discover your team's hidden Salesforce security vulnerabilities: contact us and talk to Waqas!

Q&A

Andy Montoya (Host):

Yes, absolutely. Thank you so much, Waqas and Rachel. This has been amazing information, and we definitely have some questions coming through, which is always great to see. So we'll go ahead and start at the top. The first question is, "Can you comment on the ability for Salesforce to block the download of data to a local drive, but allow the download of data to a dedicated, network-controlled access folder?"

Rachel Beard (Salesforce):

Oh, you know what, I have not heard that question before. At this time it's on a transaction-by-transaction basis, and what you can do out of the box is block the download. You're working off of the reporting event, and the operation on the report could be something like a report being run, a report being run from mobile, a preview, or a scheduled run; report export, of course, is the most frequently requested one. But it's managing the report export, not the destination. I don't know of a way to monitor that from Salesforce today.

Waqas (DigitSec):

I think if the goal is to stop somebody from taking your data to a specific device, that can only be blocked at the onset. Because once you give them a download to a specific network that they have access to, they can probably get it down to their own drive as well. So if you really want protection, that means somebody not being able to download any data at all.

That would be a more effective control, but I could see a use case where you allow that data to exist somewhere more secure. So outside of creating a custom solution with Apex or something like that, I don't know of anything available out of the box.

Andy Montoya (Host):

Thank you both! Next question: "Shield encryption allows us to encrypt at rest. What exactly does encryption protect against? Someone who gains access to a Salesforce data center and accesses the backend database?"

Rachel Beard (Salesforce):

Yes, that's basically the threat vector that folks are addressing when they're encrypting data: the concept of an intruder entering the data center and there being a theft of storage, which of course we have plenty of controls to reduce the risk of. Everything from the physical security on the premises to even concrete boundaries around the data center, so someone can't just drive a truck through a wall. Everything on the inside is locked through biometrics and prox cards and all kinds of combinations of techniques.

Then we also follow NIST data disposal methods, so you will never find a disk in a dumpster. On top of that, the other threat that folks might be concerned with would be one of our DBAs interacting with data. And this is one of my favorite topics; I could do an hour on this answer, so I'm gonna control myself and keep it short.

The smallest part of this is that we have this metadata-driven architectural model where the data is stored physically separate from the metadata. So any one of our DBAs only has access to one or the other. If they're looking at the actual data, it's not labeled with the customer name: because of multi-tenancy, it could be any customer; there's no object name; there's no field label. So it's completely unreadable from that perspective.

There would need to be collusion for them to bring those elements together and isolate specific customer data, which of course we also monitor for. So, for that reason, the majority of Salesforce customers may not believe that they need platform encryption, and I'm not gonna try to convince you that you do. The customers I see implementing platform encryption are typically doing so when they have something so sensitive that even having a DBA see it de-identified from other data is unacceptable.

Something like a world leader's passport number is definitely a case; I even saw one with kids in the foster system and understanding their placement in homes. For information that sensitive, you show that you've applied every security measure: you're not gonna skip any steps; you're gonna encrypt the data. The other big reason I see is compliance. There are lots of compliance policies out there that explicitly require that data be encrypted at rest.

And typically, along with that, they require that the keys be managed by the company. So, outside of Salesforce, you would wanna be able to control the life cycle of your key material. Those are, most frequently, the reasons why I see encryption applied.

Waqas (DigitSec):

I can speak to the best-practice element of this. As I was mentioning before, it's the defense-in-depth type of approach. Salesforce has controls in place to make sure nobody walks out with a hard drive from their data center, but in the event that unlikely thing happens, encrypted data is much better than clear-text data. Similarly, Salesforce has a permission so that only certain users can view encrypted data.

So if you use that permission rigorously throughout your enterprise, you actually add another layer to your data security: if somebody does get access to that object, the data's gonna be encrypted unless they have that permission assigned to them. Again, it's a defense-in-depth strategy that encryption helps enable. But it can be argued there's a level of trust involved when you use encryption within a SaaS platform, because at the end of the day, the keys are there and the encryption is happening there.

So if there were a catastrophic event, things could be decrypted. That is something you have to understand as well: it's not a silver-bullet solution where the data is never gonna get decrypted.

Rachel Beard (Salesforce):

The other thing I would add, on top of the compliance topic, is that demonstrating security and technical measures even when they aren't explicitly required, showing that you have some compensating controls, is frequently a tactic that I see for areas where the requirements are fuzzy. Then that expression of "we are a trusted provider and we've taken every step we can to keep your data safe" goes a long way with your stakeholders. So there are reasons to consider it, for sure.

Andy Montoya (Host):

Thank you both so much. Next up is a quick question on Salesforce deployments from sandboxes: "Is there a way to limit the number of admins who can make changes / have access to the deployments?"

Rachel Beard (Salesforce):

I would say yes, but I’m gonna defer to you, Waqas, for more color on that one.

Waqas (DigitSec):

I think you can limit them by just not having them as admins, or by delegating some of those permissions. Whenever you're interacting with the metadata API, you basically need modify-all. So whoever's doing deployments actually has complete access to that environment; there's no way of limiting that until you take that permission away from certain users.

Andy Montoya (Host):

Thank you, Waqas. Next up: "Can you recommend a good AppExchange product that aggregates, across profiles and permission sets, the total number of users that can access a Salesforce field?"

Rachel Beard (Salesforce):

Yeah, there are a lot of good solutions out there through that partner ecosystem. I would definitely recommend looking at OwnBackup; they have a solution that helps decode who sees what and why. That will be really beneficial for trying to get to the bottom of who might be able to see any type of information. Another great partner that I like to recommend is AppOmni; they also have great tools that can help you understand the visibility of data in your orgs. So that partner ecosystem is rich with additional tools that you can use for these purposes.

Andy Montoya (Host):

Thank you, Rachel. Next one up: "Can you automate data classification for fields, or does that have to be done manually per field?"

Rachel Beard (Salesforce):

Typically I see it being done manually; that's the most common way. It can also be done through the metadata API, where you can update the field properties. And we do have a solution called Data Detect, which is bundled with Salesforce Shield right now. With Data Detect, we allow you to scan data in your org and look for patterns that would indicate sensitive information: something that looks like a social security number, a credit card number, an email address, things like that.

Then the tool can present you with the impacted fields and also the data classification screens. That way you can very quickly identify where your most sensitive data lives. But today there's nothing on the field-creation screen, for example, that would allow you to set that data classification when you create a field. So it is an added step, outside of that Data Detect mechanism that I mentioned.

Andy Montoya (Host):

Okay, great. Thank you, Rachel. Next one up: "Can you touch on community access permissions? Besides what they can see from the portal itself, what can they see that isn't so obvious?"

Waqas (DigitSec):

I think the biggest thing would be if you have any custom controllers, meaning your Apex controllers. We often see that if you have an Apex web service that you've exposed, it is available anonymously on the internet. Sometimes developers don't understand the implications of that. They think that because it's an endpoint that they've defined and given a name, nobody can get to it.

That's a very dangerous assumption to make: if it's exposed, it's available to anybody. This is an area where we see a lot of potential malicious activity; there are known solutions which deploy at certain endpoints, so folks are always poking at those to see what data they can get out of them. It's quite often overlooked, when you're creating an Experience Cloud solution, that any controller you expose is actually invokable by anybody on the internet. If it's going to release some data to the end users, and the assumption was that only your solution would call it, that is a very dangerous assumption to make.
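A minimal Apex sketch of the defensive pattern for a controller reachable from an Experience Cloud site; the class and method names are hypothetical, and the key point is that the method re-checks access itself rather than trusting its caller:

```apex
// A minimal sketch: never assume only "your" page can reach an exposed
// controller. Enforce sharing and field-level security in the code itself.
public with sharing class PublicCaseController {
    @AuraEnabled(cacheable=true)
    public static List<Case> getRecentCases() {
        // "with sharing" applies the (guest) user's record access;
        // WITH SECURITY_ENFORCED rejects the call if the profile lacks
        // object- or field-level permission.
        return [SELECT Id, Subject FROM Case
                WITH SECURITY_ENFORCED
                ORDER BY CreatedDate DESC
                LIMIT 50];
    }
}
```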

Andy Montoya (Host):

Thank you, Waqas. Next one up; we've got just a few more here: "Can you recommend how to block the use of certain Chrome plugins like Salesforce Inspector?"

Waqas (DigitSec):

On the client side, it's very hard to implement controls unless you have control over the device itself. Your enterprise may have policies that it can enforce on the browser; that would be the best way to block certain extensions and add-ins within the browser.

Andy Montoya (Host):

Thank you. And one more here: "I would love to hear more about better managing sandbox security."

Rachel Beard (Salesforce):

This is another one of my favorite topics! I came from the space of being a systems integrator; I was consulting and helping build out different Salesforce implementations. And here's what I can tell you: you have access to different types of sandboxes. You have your Developer and Developer Pro boxes, and those are largely used for building in isolation, with metadata only and no data. You're building, and then you're gonna merge later (maybe into a Developer Pro, and maybe you wait to merge into your Partial Copy), but either way, you're typically building in those dev boxes.

When you get up to the Partial and Full Copy sandboxes, now you're dealing with a complete copy of metadata, but also production data. Your Full Copy, of course, is a complete copy, and a Partial is a subset of records. In any case, to get a really effective UAT cycle through, and in order to do your staging, performance testing, and training, you need access to a production-like environment. Yet you may not be able to show that production data to those users. Even if it's a training or UAT user who may have access to the production data, especially if it's sensitive information, it may not even be appropriate for them to be viewing the data in that context.

And definitely if you have third-party users, contractors, and more, you do not want them seeing confidential or competitive information. So again, we go back to that first layer of access controls. Number one, how do we protect logins to this org? Do we have tools like single sign-on in place to make sure that we limit who has access to these environments, and that we don't have users logging in inappropriately? Second, have we changed the sharing, or the profiles and things like that, in the sandbox? I tend to see a lot of rigor around managing the sharing model in a production org.

In sandboxes, though, I see a lot of users get super-user permissions, even system admin permissions. They have access to view more and edit more data than they would be able to in production, which of course widens that area of risk.

They have more data that they could exfiltrate. So we wanna make sure that you continue to limit access in that Full Copy sandbox. That means resisting the urge to give folks more permissions, and being more rigorous about that. It also means monitoring who is making changes in those sandboxes. Are they following your correct deployment path, and who is accessing what in that sandbox? Event monitoring can help with that as well.

Then I get to the next questions: do we have some truly sensitive information that just shouldn't be used in this context? Maybe you have those training users, like I mentioned, or UAT users, and maybe they're pulling up a customer record and they don't need to see real addresses, phone numbers, purchase history, and more. So then we wanna start thinking about obfuscating data. Some customers are running a script in the developer console to delete that data or transform it into asterisks or something else, just to make it clear that the data has been eliminated and is masked data.
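A minimal sketch of that kind of one-off masking script, run as anonymous Apex in a sandbox only; the field choices and replacement values here are arbitrary examples:

```apex
// A minimal sketch: blank out sensitive Contact fields in a sandbox so test
// and training users see obviously masked data.
List<Contact> contacts = [SELECT Id FROM Contact LIMIT 10000];
for (Contact c : contacts) {
    c.Phone = '***-***-****';
    c.MailingStreet = '*****';
    c.Email = 'masked.' + c.Id + '@example.invalid';
}
update contacts;
```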

We do have data masking tools at Salesforce as well to help automate that process, so that you can set up a pattern to anonymize, pseudonymize, delete, or replace with a pattern, which makes the process a lot easier. So I definitely recommend that all of those steps be evaluated. And again, that data classification is gonna inform all of this as well, because you're gonna want to obfuscate your most sensitive data elements. From there, it's using your tooling for the deployments to make sure that you don't have the kinds of issues of somebody pushing something that wasn't ready, wasn't approved, and could contain vulnerabilities.

So: strong testing processes, strong governance, and making sure that you reduce the risk of collisions or introducing bugs by having good tooling and good testing processes. That is, again, another reason why you need a Full Copy sandbox and a good staging environment. We just wanna make sure that we're using those environments safely.

Waqas (DigitSec):

One thing that I'll add (and Rachel did a phenomenal job explaining the importance of taking care of the sandboxes) is that sandboxes basically have the same threat vectors as production. From an attacker's perspective, they tend to target lower environments more, because they assume that security will be lax in those environments. So technically speaking, you should sometimes be more concerned there than in production, because lower environments are a primary target for potential malicious activity, whether internal or external; attackers know security is gonna be lax there.

Rachel Beard (Salesforce):

Agreed. That's an often-exploited threat vector, because folks assume it's not going to be monitored.

Andy Montoya (Host):

Thank you, Waqas. And thank you so much, Rachel, for such a thorough answer. And with that, we are done with questions. Thank you, everyone, for being here with us. Just a couple of notes: I think the presentation was very well received. Thank you so much for a great session. And someone commented that they heard the best explanation of encryption benefits and surfaces ever, so that is great. Any last parting words from Waqas and/or you, Rachel, before we wrap up?

Rachel Beard (Salesforce):

No, thank you. That compliment made my day. It really, really did. I really hope that you take the time to spend your cycles on this. I know it's so hard when you have so many other things you wanna do and add to your org, but really spend that time to reevaluate: what do you have in place, and is it working? Because I guarantee you roles have changed, and the way that folks work has changed.

So spend the time on that. We do have some great materials on Trailhead around Shield, around Salesforce security basics, and plenty of other topics, so I would encourage you to check those out. For event monitoring in particular, we're up to four different badges now, including one on transaction security, where you can get your own environment and test it out yourself. I would encourage you to do that as well.

Waqas (DigitSec):

Same here; I'd like to echo that and thank everybody for taking the time out. I appreciate you spending the time with us, and thank you, Rachel, for being a panelist. We enjoyed the conversation, and hopefully you all were able to gain some benefit from this. We look forward to the continued conversation.

Andy Montoya (Host):

Sounds great. Thank you all. Thank you, Rachel. Thank you.

Rachel Beard (Salesforce):

Yeah, thank you. Have a great day.

Waqas (DigitSec):

Thank you. Take care.

Want to learn more?  

Find out how InCountry saved over 1000 dev hours and released their app ahead of schedule. 

Your host and presenters

Andy Montoya (Host):

Hello everyone, and good morning, and thank you so much for being here with us today. My name is Andy, and I am on the marketing team here at DigitSec. I am very excited because today we have two seasoned security professionals who are gonna help us understand more Salesforce security blind spots and help us proactively protect against common threats. Without further ado, I'll go ahead and pass things off to Waqas and Rachel to introduce themselves.

Waqas (DigitSec):

Hello, good morning everyone. My name is Waqas Nazir. I'm the founder and CEO of DigitSec. I've been doing application security for the last 20 years, and specifically for Salesforce over the last decade. I'm excited to be here today to speak to you all about proactive measures you can take to make your Salesforce environment more secure. I'll pass it to Rachel.

Rachel Beard (Salesforce):

Hey everybody. It's great to be here today. Thanks for joining us. I am Rachel Beard, a Distinguished Security Technical Architect here at Salesforce. I've been working in the Salesforce ecosystem for about 16 years now, and about half of that has been directly at Salesforce, managing platform security topics. I help customers every day understand the different layers of security that Salesforce provides for you, so that you don't have to manage or maintain them yourself. But I also spend a lot of time on the shared responsibility and the controls that you can configure to keep your data safe and prevent risks of data exfiltration.

Andy Montoya

DigitSec

DigitSec brings four scans to protect Salesforce: Source Code Analysis, Custom Runtime Testing, Software Composition Analysis, & Cloud Security Configuration Review. #DevOps
