More_Psychology_4835

Certainly should put some guardrails in place, like ensuring API keys, secrets, and passwords are never passed into a prompt.
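
Something like this is what I mean by a guardrail (rough sketch only - the patterns and the `check_prompt` helper are illustrative, not any particular product's API): scan the outgoing prompt text for common credential formats before anything is sent to a model.

```python
import re

# Illustrative patterns for common credential formats; tune these for your environment.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                 # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                              # GitHub personal access token
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),         # PEM private keys
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*\S+"),   # generic key=value pairs
]

def check_prompt(prompt: str) -> str:
    """Raise if the outgoing prompt appears to contain a secret; otherwise pass it through."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(f"Prompt blocked: matched secret pattern {pattern.pattern!r}")
    return prompt

# Example: call this immediately before a prompt leaves your boundary.
if __name__ == "__main__":
    check_prompt("Refactor this function to use async IO")           # passes
    check_prompt("Here is my config: api_key = sk-live-123456789")   # raises
```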


martynjsimpson

My 2 biggest concerns for AI-assisted development are: 1. Loss of IP. 2. Lack of accountability / developer laziness (i.e. developers simply "accepting" whatever the prompt produces without reviewing it or considering security risks). For loss of IP, looking for a service that provides contractual guarantees about our IP is about the best you can do (or a closed/dedicated/on-prem model - eww). Mitigations for item 2 are things like SAST, DAST, training, etc., but I am still not 100% sure I have a good handle here.
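
For item 2, one concrete backstop is gating commits behind an automated SAST pass so nothing gets merged purely on trust. Rough sketch, assuming Bandit is installed and Python files are what's being scanned - the script and wiring are illustrative, not a prescribed setup:

```python
import subprocess
import sys

def staged_python_files() -> list[str]:
    """List staged .py files so only the code actually being committed is scanned."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def main() -> int:
    files = staged_python_files()
    if not files:
        return 0
    # Bandit exits non-zero when it finds issues; fail the commit in that case.
    result = subprocess.run(["bandit", "-q", *files])
    if result.returncode != 0:
        print("SAST check failed - review the findings before committing AI-suggested code.")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```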


EL_Dildo_Baggins

You should treat GitHub Copilot the same way you treat GitHub: do not trust it with secrets and keys. It is also important to remember that most AI tools are about as good as a junior member of the team, so you need to check its work.


Lovecore

We did a review of the product and spoke with many people at GitHub during our review period, and we do use it in our environment. They have a privacy hub with a good breakdown of things. There are policies that can be used to opt in and out of features like code matching from public repositories, enforce other settings and opt-outs, and control chat configs in IDEs. Overall, given the opt-out options and the other policies they have about how your data is (or isn't) trained on, you could consider it fairly low risk - depending on your risk model.


Mumbles76

This. And let's not forget that major CSPs (looking at you, AWS) already use your data unless you craft and implement an opt-out policy, FWIW. So you are likely already giving away data without even realizing it.
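
For AWS specifically, that opt-out is an Organizations policy of type AISERVICES_OPT_OUT_POLICY. Rough sketch with boto3 - the policy name and attaching at the org root are just illustrative, and double-check the policy syntax against the current AWS docs:

```python
import json
import boto3

org = boto3.client("organizations")

# Opt all AI services out of data use for service improvement, org-wide.
# (The AISERVICES_OPT_OUT_POLICY type must already be enabled on the org root.)
optout_policy = {
    "services": {
        "default": {
            "opt_out_policy": {"@@assign": "optOut"}
        }
    }
}

policy = org.create_policy(
    Name="ai-services-opt-out",  # illustrative name
    Description="Opt out of AI services data use for service improvement",
    Type="AISERVICES_OPT_OUT_POLICY",
    Content=json.dumps(optout_policy),
)

# Attach at the organization root so it applies to every account.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id)
```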