Bot Disclosure Best Practices and Guidelines
Last Updated: October 19, 2018
Using your bot successfully is a major part of leveraging Drift. Recently, there has been much more conversation, news, and legislation around Artificial Intelligence (AI) and bots, and navigating the landscape can be tricky. The legal landscape around bots is something we are actively monitoring, and we will provide updates as we see developments.
The Current Landscape in Europe and the US
Under the General Data Protection Regulation (GDPR), a data subject has the right to object to automated processing of their data, and to request that the processing be performed by a human being when that processing could result in a decision that produces legal effects or similarly significant effects on the data subject (Article 22, GDPR). In order for a data subject to properly object, they would have to know that they are speaking with a bot, which is where proper disclosure of the bot may be a factor. It’s unclear whether any of the processing performed by Drift could have a significant legal impact on the data subject. For example, if the bot vets a data subject and determines that they are not a good lead, is that a significant legal effect? Instead of trying to navigate those waters, Drift errs on the side of disclosure.
In the US, federal legislation has been proposed (the Bot Disclosure and Accountability Act) but has not yet passed. The full text of the proposed bill is available here. However, the state of California recently enacted a law aimed at deceptive bot practices, and it is expected that other states will pass similar laws.
The California law was drafted with the intent of preventing deceptive practices by commercial and political bots, primarily on social media. Think bots that have harmful effects on society, either by influencing elections or by posting fake reviews to falsely shape perceptions of a product or service. However, since the law hasn’t gone into effect and we haven’t yet seen any enforcement actions or interpretations of the law, it’s unclear how it may apply to companies like Drift, which use bots for sales and marketing. Additionally, we don’t yet know how this law will apply to bots that send automated emails on behalf of humans.
Another interesting side effect of legislating what bots can and cannot say is its intersection with the First Amendment right to free speech.
So, all of the boring legal stuff aside, the important question is: how does this affect Customers and their use of Drift now? Below are some guidelines.
While it is unclear whether either of these laws apply to Drift, we believe that it’s important to distinguish between bots and humans because:
- The expectations of the party on the other end change depending on whether they are talking to a human or a bot.
- In order for a human to object to automated processing, they have to know that the processing taking place is automated or being performed by a bot.
- Bots aren’t the enemy! Bots help you, and don’t hurt you.
Here are some things that Drift does to distinguish between humans and bots:
- When a human is typing, an ellipsis appears (“...”). Also, text appears after a natural delay while the human being is typing with their human fingers.
- When the bot responds, there is no ellipsis and the text appears without delay.
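The typing-cue distinction above can be sketched in simplified form. The function and field names below are illustrative assumptions, not Drift's actual API:

```python
def render_reply(sender: str, text: str, per_char_delay: float = 0.05) -> dict:
    """Return a simplified rendering plan for a chat reply.

    Human replies show a typing indicator ("...") and appear after a
    per-character delay; bot replies render instantly with no indicator.
    Hypothetical sketch only -- not Drift's real implementation.
    """
    if sender == "human":
        return {
            "show_typing_indicator": True,              # ellipsis while typing
            "delay_seconds": len(text) * per_char_delay,  # natural typing pace
            "text": text,
        }
    return {
        "show_typing_indicator": False,  # bot replies appear immediately
        "delay_seconds": 0.0,
        "text": text,
    }
```

The point of the asymmetry is that the visitor gets a passive cue about who (or what) they are talking to, even before any explicit disclosure.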
What Customers can do to distinguish between humans and bots:
- Give your bot a distinctive name (depending on your primary audience and the language they speak, it may or may not be enough to call it something like “Bot”).
- To be extra safe, you can set up your playbook so that your bot says “Hi! I’m a bot!”
- Don’t give your bot a human photo and a human name; that could mislead the person on the other side of the chat.
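The guidelines above could be turned into a simple self-check when configuring a bot. This is an illustrative heuristic under assumed names, not a legal test or a Drift feature:

```python
def check_bot_disclosure(name: str, greeting: str, has_human_photo: bool) -> list[str]:
    """Flag bot configuration choices that could mislead visitors.

    Hypothetical sketch: checks the three guidelines (distinctive name,
    explicit disclosure in the greeting, no human photo).
    """
    warnings = []
    disclosure_phrases = ("i'm a bot", "i am a bot", "automated assistant")
    if not any(p in greeting.lower() for p in disclosure_phrases):
        warnings.append("Greeting does not clearly disclose that this is a bot.")
    if has_human_photo:
        warnings.append("A human photo may suggest the visitor is chatting with a person.")
    if "bot" not in name.lower():
        warnings.append("Consider a name that signals the bot's nature (e.g. 'DriftBot').")
    return warnings
```

A configuration that follows all three guidelines returns no warnings; a human-looking persona with no disclosure trips all three checks.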
We are proactively monitoring this space for updates to make sure that Drift stays at the forefront of compliance. Bots are a huge help when used correctly, and we want to continue to provide our customers with the tools and resources to configure their bots effectively and in compliance with the law.