LinkedIn is preparing to roll out new security features to protect users from scammers posing as fake corporate executives and job recruiters on the platform.
LinkedIn is especially useful for scammers and even spies because LinkedIn profiles can contain sensitive details, including the person’s current job, work history, and a way to contact them directly.
Over the years, hackers and scammers have also been spotted exploiting LinkedIn to send fake job postings designed to trick victims into installing malware or handing over their personal data. Earlier this month, security reporter Brian Krebs reported that a flood of fake LinkedIn profiles belonging to people claiming to be information security consultants and managers had appeared, likely for malicious purposes.
In response, LinkedIn plans to introduce some changes over the next few weeks that promise to make it easier for users to detect suspected fraudulent activity. One of them is a new “About This Profile” feature, which will tell you when a LinkedIn user created their profile and whether it was verified with a work phone number or email.
“We hope reviewing this information helps you make informed decisions, such as when deciding whether to accept a connection request or respond to a message,” LinkedIn Vice President Oscar Rodriguez wrote in a blog post.
The “About This Profile” feature is coming this week to every user’s profile page and can be accessed through the three-dot menu. The company also plans to add it to LinkedIn invitations and messages. For verifying work emails, LinkedIn will start with a limited number of companies before expanding the program over time.
The other change involves detecting AI-generated images on LinkedIn profile pages. These AI-generated “deepfake” images can depict seemingly real but entirely fictional people, and have quickly become a red flag that a LinkedIn account is a scam.
An AI-generated face. (Credit: thispersondoesnotexist.com)
According to Rodriguez, the company now uses its own AI-based system to detect such deepfakes. It works by spotting “subtle image artifacts associated with the AI-based synthetic image generation process without performing facial recognition or biometric scans,” he said.
Additionally, the company is working on a way to alert users to suspicious activity occurring through their personal LinkedIn messages.
“We may warn you about messages that ask you to take the conversation to another platform, as this may be a sign of a scam. These warnings will also give you the choice to report the content without notifying the sender,” Rodriguez said.