This post is part of Lifehacker's "Living With AI" series: We investigate the current state of AI, walk through how it can be useful (and how it can't), and evaluate where this revolutionary tech is heading next. Read more here.
Almost as soon as ChatGPT launched in late 2022, the world started talking about how and when to use it. Is it ethical to use generative AI at work? Is that "cheating?" Or are we simply witnessing the next big technological innovation, one that everyone will either have to embrace or fall behind dragging their feet?
AI is now a part of work, whether you like it or not
AI, like anything else, is a tool first and foremost, and tools help us get more done than we can on our own. (My job would literally not be possible without my computer.) In that regard, there's nothing wrong, in theory, with using AI to be more productive. In fact, some work apps have fully embraced AI. Just look at Microsoft: The company basically defined what "computing at work" means, and it's adding AI functionality directly into its products.
Since last year, the entire Microsoft 365 suite (including Word, PowerPoint, Excel, Teams, and more) has adopted "Copilot," the company's AI assistant. Think of it like Clippy from back in the day, only now way more useful. In Teams, you can ask the bot to summarize your meeting notes; in Word, you can ask the AI to draft a work proposal based on your bullet list, then ask it to tighten up specific paragraphs you aren't thrilled with; in Excel, you can ask Copilot to analyze and model your data; in PowerPoint, you can ask for an entire slideshow to be created for you based on a prompt.
These tools don't just exist: They're being actively created by the companies that make our work products, and their use is encouraged. It reminds me of how Microsoft advertised Excel itself back in 1990: The ad presents spreadsheets as time-consuming, rigid, and featureless, but with Excel, you can create a working presentation in an elevator ride. We don't see that as "cheating" work: This is work.
Intelligently relying on AI is the same thing: Just as the Excel of 1990 extrapolated data into cells you didn't create yourself, the Excel of 2023 will answer questions you have about your data, and will execute commands you give it in plain language rather than formulas and functions. It's a tool.
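To make that contrast concrete, here's a minimal sketch in Python, using pandas as a stand-in for a spreadsheet. This is purely illustrative (it's not how Copilot works internally): the first approach is you spelling out the calculation, the second is the kind of result a plain-language question like "what's the average revenue per region?" would hand back.

```python
import pandas as pd

# A small sales table, like one you might keep in a spreadsheet.
sales = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "revenue": [1200, 950, 1430, 1100],
})

# The 1990 way: you spell out the calculation yourself, the code
# equivalent of typing =AVERAGEIF(A:A, "North", B:B) into a cell.
north_avg = sales.loc[sales["region"] == "North", "revenue"].mean()
print(f"Average North revenue: {north_avg}")

# The Copilot-era way: you ask the question in plain language,
# and the tool produces something like this grouping for you.
print(sales.groupby("region")["revenue"].mean())
```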
What work shouldn't you use AI for?
Of course, there's still an ethical line you can cross here. Tools can be used to make work better, but they can also be used to cheat. If you use the internet to hire someone else to do your job, then pass that work off as your own, that's not using the tool to do your work better. That's wrong. If you simply ask Copilot or ChatGPT to do your job for you in its entirety, same deal.
You also have to consider your own company's guidelines when it comes to AI and the use of outside technology. It's possible your organization has already established these rules, given AI's prominence over the past year and a half or so: Maybe your company is giving you the green light to use AI tools within reason. If so, great! But if your company decides you can't use AI for any purpose as far as work is concerned, you might want to log out of ChatGPT during business hours.
But, let's be real: Your company probably isn't going to know whether or not you use AI tools if you're using them responsibly. The bigger issue here is privacy and confidentiality, and it's something not enough people think about when using AI in general.
In brief, generative AI tools work because they are trained on huge sets of data. But AI is far from perfect, and the more data the system has to work with, the more it can improve. You train AI systems with every prompt you give them, unless the service allows you to specifically opt out of this training. When you ask Copilot for help writing an email, it takes in the entire exchange, from how you reacted to its responses, to the contents of the email itself.
As such, it's a good rule of thumb to never give confidential or sensitive information to AI. An easy way to avoid trouble is to treat AI like you would your work email: Only share information with something like ChatGPT that you'd be comfortable emailing a colleague. After all, your emails could very well be made public someday: Would you be OK with the world seeing what you said? If so, you should be fine sharing with AI. If not, keep it away from the robots.
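If you do need to paste work material into a chatbot, it helps to scrub it first. Here's a minimal sketch of that idea in Python; the `redact` helper and its patterns are my own illustration (and far from foolproof), not a feature of ChatGPT or Copilot:

```python
import re

# Hypothetical example patterns for obviously sensitive strings;
# this list is illustrative, not exhaustive.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Email jane.doe@example.com about invoice 4421; her cell is 555-867-5309."
print(redact(prompt))
# Email [EMAIL REDACTED] about invoice 4421; her cell is [PHONE REDACTED].
```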
If the service offers you the choice, opt out of this training. By doing so, your interactions with the AI will not be used to improve the service, and your previous chats will likely be deleted from the servers after a set period of time. Even so, always refrain from sharing private or corporate data with an AI chatbot: If the developer keeps more data than we realize and is ever hacked, your work data could end up in a precarious place.