If so, we’d love to hear from you. A long-time Brigada partner wrote earlier this week, noting that his organization is trying to figure out how to ensure secure use of generative AI. They were wondering what kinds of policies or standard operating practices they need. He lamented, “If we keep the ‘Chat History & Training’ setting disabled (so as to avoid sensitive organizational information being used to train publicly available GPTs), this eliminates the possibility of continually training a tool on certain topics (which can be one of the greatest assets that AI offers).” If you or your organization has such a policy, would you point to it in a comment below? Thanks in advance!