In my defence, I manually verify every test/calculation by hand, but so far Copilot is nearly 100% accurate with the tests it generates. Unless you're working with something particularly complex, if Copilot doesn't understand what a function does, you might want to check whether the function should be simplified or split up. Specific edge cases I still need to write myself though, as Copilot seems mostly focused on the happy paths it recognises.
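To give a rough idea of the split I mean, here's a sketch (the parse_price function is hypothetical, purely for illustration): Copilot will usually produce the happy-path test unprompted, while the failure cases are the ones I end up writing by hand.

import pytest

# Hypothetical function under test, just to illustrate the point:
def parse_price(text: str) -> int:
    """Parse a price string like '12.50' into integer cents."""
    return round(float(text) * 100)

# The kind of happy-path test Copilot tends to generate on its own:
def test_parse_price_happy_path():
    assert parse_price("12.50") == 1250

# The edge cases I'd still write myself:
def test_parse_price_rejects_empty_string():
    with pytest.raises(ValueError):
        parse_price("")

def test_parse_price_rejects_non_numeric_text():
    with pytest.raises(ValueError):
        parse_price("twelve fifty")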
I'm a bit of a TDD person. I'm not as strict about it as some people are, but the idea of just telling AI to look at your code and make unit tests for it really rubs me the wrong way. If you wrote the code wrong, it's gonna assume it's right. And sure, there are probably those golden moments where it realizes you made a mistake and tells you, but that's not something unique to "writing unit tests with AI"; you could get that from an ordinary code review, or just by asking the AI to review the code directly.
I'm not dogmatic about test driven development, but seeing those failing tests is super important. Knowing that your test fails without your code but works with your code is huge.
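That red-to-green loop is easy to show in a sketch (the slugify function here is made up, just to illustrate the idea):

# Step 1: write the test first and run it; it fails because slugify doesn't exist yet (red).
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 2: write just enough code to make the test pass (green).
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

The point isn't the function, it's that you watched the test fail first, so you know the test is actually capable of failing.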
So many unit tests I see are so stupid. I think people just write them to get coverage sometimes. Like, I saw a test the other day that a coworker wrote for a function that returns a date given a query. The test data was a list with a single date. That's not really testing that it's grabbing the right one at all.
It's just sort of a bigger problem I see with folks misunderstanding and/or undervaluing unit tests.
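For a rough illustration of what I mean (the function here is a hypothetical stand-in, not my coworker's actual code):

from datetime import date

# Hypothetical stand-in: return the most recent available date on or before the query date.
def get_report_date(available_dates, query_date):
    return max(d for d in available_dates if d <= query_date)

# The kind of test I'm complaining about: with a single date in the list,
# the function can't pick the wrong one, so the test proves almost nothing.
def test_get_report_date_single_item():
    assert get_report_date([date(2024, 1, 1)], date(2024, 6, 1)) == date(2024, 1, 1)

# A test that actually exercises the selection logic:
def test_get_report_date_picks_latest_on_or_before_query():
    dates = [date(2023, 12, 31), date(2024, 3, 1), date(2024, 9, 1)]
    assert get_report_date(dates, date(2024, 6, 1)) == date(2024, 3, 1)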
It's simple, really. If you don't understand what the AI is telling you to code, you'll spend five times what it would take a rawdogger to code it.
If you write the stuff yourself from scratch you know your train of thought, you know what it means and you know what it needs to be better.
Show me a head-to-head comparison of several coders doing the same assignment and let half of them use AI. Then we can assess the effects. My hypothesis is that the fastest one would have used AI. The slowest one wouldn't have used AI, but is a crappy coder. But there will probably be non-AI coders quicker than some AI coders.
I disagree so much. The problem with all of these takes is that they're built on the assumption that the main skill of a software engineer is writing code. That's the same mistake a lot of engineers below senior level make.
See, the important thing about engineering is to provide a fitting solution that satisfies many different domains. You need to understand and interconnect a lot of information. And the most important thing a good engineer has is "creativity".
In your example, you think about assignments as you have them in university: arbitrary scenarios that teach you a tool (usually a programming language). However, that is not the kind of assignment you face as an engineer.
It's not like in NCIS, where somebody comes in and says: "Can you make this algorithm faster?"
It's more like (an actual example from last week): can you (as in, the team) analyze why this legacy system failed? We have these analytics for you. We currently conduct these labs and have these user voices. Figure out a way we can revamp this whole thing, but make it successful this time. Once that's done, create an MVP and a rough roadmap, the latter in alignment with our overarching strategy.
I tried ChatGPT on something I didn't understand and it led me down the wrong path. AI is only good for boilerplate and finding interesting ways to refactor, imo.
Show me a head-to-head comparison of several coders doing the same assignment and let half of them use AI. Then we can assess the effects. My hypothesis is that the fastest one would have used AI. The slowest one wouldn't have used AI, but is a crappy coder. But there will probably be non-AI coders quicker than some AI coders.
You need to introduce one more independent variable, which would be level of coding/ programing experience. If you really do already know what you are doing, and you know how to structure a good query, and know how to confirm if the code assistant/ AI got it right, it absolutely does make you much faster. Also, for really low level stuff, like defining helper functions, it rarely gets them objectively wrong, although they are often pretty... plain? or naive? So I would extend this hypothesis to cover the range of skill levels in individual coders and programmers. I think this is actually the striking difference. If you already know enough to code these things from scratch, it does make a huge difference.
Here is an example prompt I would use, and the result.
Prompt:
I want a python function to pull imagery for a bounding box in latitude longitude, for a specific year, and a specific band order from this arcgis REST server (https://map.dfg.ca.gov/arcgis/rest/services/Base_Remote_Sensing). I always want a 512 by 512 pixel tile. The layer pattern for the server is NAIP_XXXX_ZZZZ where XXXX is the year, and ZZZZ is either 4Band, CIR, or NDVI, or missing (including the underscore) for RGB. I want the data as a spatially projected geotiff. Have the function query the server first to confirm the layer options before querying.
Response:
import requests
import rasterio
from rasterio.io import MemoryFile
import numpy as np

def get_arcgis_imagery(bbox, year, band_order, output_path):
    """
    Pulls imagery from the ArcGIS REST server for a given bounding box, year, and band order.

    Parameters:
    - bbox: (min_lon, min_lat, max_lon, max_lat)
    - year: YYYY (e.g., 2020)
    - band_order: '4Band', 'CIR', 'NDVI', or '' for RGB
    - output_path: Path to save the output GeoTIFF
    """
    base_url = "https://map.dfg.ca.gov/arcgis/rest/services/Base_Remote_Sensing"
    service_url = f"{base_url}/NAIP_{year}_{band_order}/ImageServer" if band_order else f"{base_url}/NAIP_{year}/ImageServer"

    # Confirming the layer exists
    response = requests.get(f"{service_url}?f=json")
    if response.status_code != 200:
        raise ValueError(f"Layer NAIP_{year}_{band_order} does not exist on the server.")

    # Defining export parameters
    params = {
        'bbox': f"{bbox[0]},{bbox[1]},{bbox[2]},{bbox[3]}",
        'bboxSR': 4326,
        'size': '512,512',
        'imageSR': 3857,  # Projected coordinate system
        'format': 'tiff',
        'f': 'image'
    }

    # Requesting the imagery
    export_url = f"{service_url}/exportImage"
    image_response = requests.get(export_url, params=params)
    if image_response.status_code != 200:
        raise ValueError("Failed to retrieve image data.")

    # Saving the image as a GeoTIFF
    with MemoryFile(image_response.content) as memfile:
        with memfile.open() as dataset:
            profile = dataset.profile
            data = dataset.read()
            with rasterio.open(output_path, 'w', **profile) as dst:
                dst.write(data)
This function appears in just a second, and I don't even need to consider its details. It's well written, commented, and it just works.
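For context, calling it looks something like this (the bounding box values below are arbitrary placeholders, not coordinates from my actual workflow):

# Example call; the bbox is (min_lon, min_lat, max_lon, max_lat) in WGS84.
bbox = (-121.60, 38.50, -121.55, 38.55)
get_arcgis_imagery(bbox, year=2020, band_order="CIR", output_path="naip_2020_cir.tif")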
If you're developing something that has been written a million times before, such as a user authentication API, then yes, you can just let ChatGPT do it for you.
But as soon as you're writing something new or niche any LLM is going to mostly spew useless nonsense.
I've been working with Bevy a lot lately, and because it's new and iterating quickly there are a lot of breaking changes between versions. AI simply isn't able to cope with that. It was trained on years-old data and is incapable of adapting to the new way of doing things.
Honestly thought this was a post on [email protected], as that's usually where all the memes/jokes have been posted so far, so I didn't even think to check.
don't be serious, don't forcefully assert your own opinion (a little bit shrouded in irony is fine), point and laugh at people saying incorrect and implausible things
My reason is that I just enjoy computers n shit. It's nice to learn new stuff while programming so I have it in my toolbox for later and I don't just have a cursory idea of what my computer is doing.
Trite, but it is true that AI gives you an edge. Kind of blows my mind that my current company doesn't just buy all their devs a Copilot license. It's an absolute no-brainer.