I'm in the building sciences. The biggest unanswered question we come up against almost daily is "what the fuck was the last guy thinking?".
And we avoid, daily, admitting we were the last guy somewhere else.
Trying to prevent bacteria from developing antimicrobial resistance. At current rates, antimicrobial-resistant bacteria are projected to kill more people than cancer within 30 years.
I've been around the AMR space for a while, but only as a collaborator. I've helped with some bacterial assemblies and with finding methods of detecting ICEs (integrative and conjugative elements). I'm a bioinformatician, so I get to jump onto a bunch of different projects.
AMR is scary and not really in the public consciousness of upcoming issues. I thought about it every time my son had an infection when he was very young, hoping he hadn't picked up a resistant strain.
How much of this resistance is down to feeding livestock antibiotics compared to doctors over-prescribing to people? Do you know what the main cause is? Is there any way to slow down the rate?
The level of antibiotic use in livestock in various countries is astonishing.
Most European nations have to keep a very strict log of which antibiotics are used, and for what reason.
Meanwhile, until recently India was using Colistin as a growth promoter.
I think there are so many new and great ideas in this space but you have to consider how science is funded. Funding bodies and reviewers want incremental research that is safe. This has led to our current situation.
Phage therapy has been around for so long, but only in the last 10 years has it gained credibility and been treated as a path worth taking.
Ultimately, antimicrobial resistance is incredibly solvable even at a policy level and definitely across many scientific levels. But it requires more cooperation than farms, pharmacies, hospitals, states and countries can muster.
Well, there are counterexamples to this, so it must not be true.
In pretty much every single relationship worldwide, one person can very easily determine if the recommendation from the other for where to eat or what to watch is correct or not.
And yet successfully figuring out where to eat or what to watch is nigh impossible.
Isn't it proof enough? Using the Sudoku example: there are certainly different levels of difficulty, depending on how many numbers are given at the start and other parameters. Checking whether the solved answer is correct is always the same "difficulty"; thus there is no correlation between the difficulty of the puzzle at the beginning and the difficulty of checking correctness. Some people might not be able to solve it, but they can certainly check whether the solution is right.
Unfortunately no. The question is a simplification of the P versus NP problem.
The problem lies in having to prove that no easy method exists. How do you prove that, no matter what method you use to solve the sudoku, it can never be done easily? You'd need to somehow show that no such method exists, and that is rather hard. In principle, there could be some easy way to solve sudokus that we simply haven't discovered yet.
I'm using sudokus as an example here, but the same applies to generic problems. There's also a certain formalism about what "easy" means, but I won't get into it further; it's a rather complicated area.
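To make the asymmetry concrete, here's a minimal sketch of the "easy to verify" half for a standard 9x9 grid (my own illustration, names and all): checking a filled-in solution is just a few quick loops, while no comparably fast general recipe for producing the solution is known.

```python
def is_valid_sudoku(grid):
    """Check a completed 9x9 grid (list of 9 lists of 9 ints): every row,
    column, and 3x3 box must contain the digits 1-9 exactly once.
    This is the 'easy to verify' half of the asymmetry."""
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [{grid[r][c] for r in range(9)} for c in range(9)]
    boxes = [{grid[3 * br + dr][3 * bc + dc] for dr in range(3) for dc in range(3)}
             for br in range(3) for bc in range(3)]
    return all(group == digits for group in rows + cols + boxes)
```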
Interestingly, it involves formal languages a lot, which is funny as you wouldn't think computer science and linguistics have a lot in common, but they do in a lot of ways actually.
For the purposes of OP's problem (P v NP), it considers not particular solutions but general algorithmic approaches. Thus, we consider problems as either Hard (exponential time in the size of the input) or Easy (polynomial time in the size of the input).
A number of important problems fall into this general class of Hard problems: Sudoku, Traveling Salesman, Bin Packing, etc. These all have instances where the best known algorithms take exponential time.
On the other hand, as an example of an easy problem, consider sorting a list of numbers. It's really easy to determine whether a list is sorted, and it's always relatively fast/easy to sort the list, no matter what order it started in.
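A rough sketch of that, assuming a plain list of numbers (the names and values here are mine, purely for illustration): both checking sortedness and sorting itself run quickly.

```python
import random

def is_sorted(xs):
    """Verify: a single linear pass over the list."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

xs = random.sample(range(1_000_000), 100_000)  # 100k distinct numbers in random order
print(is_sorted(xs))          # almost certainly False for a random shuffle; checking is fast
print(is_sorted(sorted(xs)))  # True: solving (sorting) is also fast, roughly n log n
```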
Most people would probably intuitively answer "no", and most computer scientists agree, but this has still not been proven, so we actually don't know.
I disagree, I think most computer scientists believe that P != NP, at least when it comes to classical computers. If we believed that P = NP, then why would we bother with encryption?
I think you've misunderstood 😅. Answering "no" to that question corresponds to P != NP (there are problems that are easy to verify but not easy to solve), while "yes" means P = NP (if a solution is easy to check, the problem must be easy to solve). So I am saying most people and most scientists believe P != NP exactly as you say.
I do other audits, mostly safety and environmental, and my big question is usually "nobody made you write this, why would you write this down if you don't want to do it?"
Mostly cybersecurity struggles. If you invest millions in a castle with a gigantic lock and a pit full of piranhas, would you leave the service entrance open and give everyone in town the key? Yeah, more common than not.
But an IT audit is only necessary if your company goes public or if the owner wants it, or maybe if you are a tech company.
I'm only a professional scientist in the loosest sense of the term but for years we've tried to figure out why Joe can't leave the break room to fart and who the fuck does he think he is?
How to get supervisors, superintendents, school boards, and even politicians to let teachers teach. It’s understood that overtesting reduces learning. It’s understood that rigid curriculums don’t work, and you really should be tailoring lessons to the capabilities of the class. All kinds of educational philosophy are understood well and in depth… but being permitted to apply any of it?
As someone who does hiring for tech, the problem is that things are metric-driven. You can't extract metrics from letting teachers "teach their own way" without standardized tests, and if you don't have metrics, you don't know if "teaching their own way" is working in practice (you can extend this logic down to understand the rigid curriculums).
By the way, I think this is all bullshit, but that's why it happens.
Oh yeah, I fully understand why the stupidity happens/happened. I don’t know how to fix it or if it can be fixed… that’s why I posted it here, in the unsolved problems in your field thread!
I watched two twelve-year-old children take a four-hour reading exam today. They ran out of time without finishing. Please can North Carolina get its metrics some other way.
My current theory is that the state of NC so badly wants to say that public schools are failing that it is giving students near-impossible exams.
I have a question about rigid curriculums. This is mostly for high school. Many of my teachers had curriculums and syllabi that they had been using for years and kept basically the same, and then there were the AP classes, where the curriculum was determined by the AP exam.

I felt that I learned really well in AP classes, and we would get through much more advanced material in the AP classes than in others. I also felt that the teachers who had developed somewhat fixed curriculums from experience taught much more efficiently than those who hadn't. It never felt like the teachers were changing their curriculum for each class, whether it was an AP class or not, because most had their curriculums kind of figured out over the course of teaching for many years. And most of the teachers I had in high school were excellent.

So my question is, why is it believed that rigid curriculums don't work? Because in my schooling experience, whether the rigid curriculum was developed by the individual teacher or by an external organization (like AP), the class seemed to benefit from having fixed goals for the year.
Probably not the most complex, but in programming: the traveling salesman problem. Intuitive for humans, really tough to compute. It highlights how sophisticated our brains are with certain tasks, and what we take for granted.
I once accidentally worked myself into trying to solve the traveling salesman problem. I was doing some work on a very specific problem, and I got to a point where I couldn't figure out a way to efficiently link up a bunch of points. The funny thing is that I knew about the TSP, but I just didn't realize that the problem I was trying to solve was a case of the TSP. After a couple of days trying to figure it out, I realized what it was, and that it was futile.
It was a good lesson to always try to find the most abstracted version of the problem you are trying to solve, because someone smarter has either tried and failed or tried and succeeded.
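For what it's worth, here's a minimal brute-force sketch of why that realization matters (the city coordinates and names are made up, purely for illustration): an exhaustive search has to consider a factorial number of orderings, which is exactly why hand-rolling an exact solution stops being practical beyond toy sizes.

```python
from itertools import permutations
from math import dist, factorial

# Hypothetical city coordinates, just for illustration.
cities = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 6), (4, 7), (7, 2), (3, 5)]

def route_length(order):
    """Total length of a closed tour visiting the cities in this order."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exhaustive search: fix city 0 as the start and try every ordering of the rest.
best = min(permutations(range(1, len(cities))),
           key=lambda rest: route_length((0,) + rest))
print("best tour:", (0,) + best, "length:", round(route_length((0,) + best), 2))
print("orderings checked:", factorial(len(cities) - 1))  # 5040 here; ~1.2e17 for 20 cities
```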
My field of expertise is bacterial pathogenesis with a particular interest in pneumococcal pneumonia.
And it's true, immunology is so ridiculously complex that no one person can ever hope to fully understand it. Immune cells are helpful or detrimental depending on the context, and sometimes even both. And we don't really fully know why. The problem is that pathogens and their hosts have been in an evolutionary arms race for billions of years, and unraveling all of that evolutionary technical debt is Fun™
To give an example, Toll-like receptors are one of the most important pathogen-detection mechanisms, and they were discovered only about 25 years ago, with people only really figuring out their importance about 20 years ago. There are researchers who spent the majority of their careers before one of the most crucial immune pathways had even been discovered.
We really don't know what's going on with immunology, and to say otherwise is, as I've said, an outright lie. People seem to overestimate how much we know about the immune system, not realizing that we are still very much in the "baby phase" of immune research. The fact that we are able to do so much already is really more a testament to human ingenuity than anything else.
My personal experience is that people who claim to completely understand how the immune system works are more likely to be science deniers (or, more likely, just naive).
I feel out of place among all the very universal questions here, but as a paleontologist specialised in some reptilian groups, the question would probably be "where the fuck do turtles come from?!"
The thing is that fossil evidence points to different answers than genetic evidence does, and they separated from other extant groups long enough ago that we keep getting new "definitive" answers every year.
Genomics makes this answerable though? It's just a matter of whether DNA is preserved or not in fossils. Genomics is more reliable than comparative anatomy.
Comparative genomics can accurately place turtles in animal phylogeny.
Sorry if I misunderstood your post. Or am I wrong here?
In phylogeny, genomics is just another tool. The point is that turtles are of course animals, but they branch off from different reptile groups depending on whether you look at morphological evidence (which includes fossil data) or at molecular (genetic) evidence (which only includes extant species). This is not something frequent, as molecular evidence usually tends to strengthen evolutionary relationships previously established from morphology. And even though molecularists are more numerous today, their methods are neither better nor worse than anatomy.
Phylogeny is not as straightforward as some people make it seem, and molecular phylogeny in particular tends to rely on abstract concepts that can't always be backed up by biological evidence (I'm not saying it's wrong, it's very often very good, just that a lot of people doing it do not understand the way it works, and thus can't examine the process critically).
And so the origin of turtles is still very much an active debate!
When I was a graduate student, I studied magnetism in massive stars. Lower mass stars (like our sun) demonstrate convection in their outermost layers, which creates turbulent magnetic fields. About 1 in 10 higher mass stars (more than ~8x the mass of the sun) host magnetic fields that are strong and very stable. These stars do not have convection in their outer layers (and thus can’t generate magnetic fields in the same fashion as the sun), and it is thought that these fields are formed very early in the star’s life. Despite much effort, we haven’t really figured out how that happens.
As someone on the outskirts of Data Science, probably something along the lines of "Just what the fuck does my customer actually need?"
You can't throw buzzwords and a poorly labeled spreadsheet at me and expect me to go deep-diving into a trash heap of data and magically pull out a reasonable answer. "Average" has no meaning if you don't give me anything to average over. I can't tell you what nobody has ever recorded anywhere, because we don't have any telepathic interfaces (and we'd probably get in trouble with the workers' council if we tried to get one).
I'm sure there are many interesting questions to be debated in this field, but on the practical side, humans remain the greatest mystery.
As a software engineering researcher, I strongly agree. SE research has studied code comprehension for more than 40 years, but even after all that time we know surprisingly little about what makes really high-quality code. We are decent at saying what makes very bad code, though; beyond the extreme cases, it's hard to come to fairly general statements.
we become programmers because we lack creativity. my brain short circuits when i have to come up with something other than "foo", "bar", or maybe even "baz"
I have the opposite problem, my variables are sometimes too descriptive. I even annoy myself at times with VariableThatDoesThisOneThing and VariableThatDoesDifferentThing just because I want to be able to come back later and not wonder what I was smoking.
My brother works in molecular biology; he tells me the field’s understanding of peptides has only just begun and it’s only through machine learning that they are now starting to make progress. 99% seem to be post-translational garbage; the other 1% is likely to be the basis of a revolution in treatment options.
I work in computational biophysics. The field has been slowly chipping away at the structure and function of every protein for decades (it's a solvable problem, it's just going to take a lot of time and energy) and recently a bunch of clueless SF tech bros have bumbled their way into the field and declared that they've solved everything.
Yeah, I get the same impression from my brother; he’s active on the science side of the field (recently published in Nature Communications about AI and peptides) and his pet hate is Kurzweil and their ilk.
Super interesting! I watched an explainer last night about a theory that consciousness arises from the collapse of quantum wave functions in microtubules.
The vast majority went straight over my head, but the host stated that the theory was seen as completely insane by their peers and that just recently it’s been gaining credibility because of some new research in the past few weeks.
Sounds like they really don't want their lives to be deterministic. I'm skeptical of anyone who jumps to quantum mechanics to explain consciousness. Would love to know what research you are referencing.
Is this theory what you're referring to? Just curious because it always seemed interesting to me but I'm not educated enough to even know how to approach the subject beyond going, "huh neat."
How to accurately estimate signal crosstalk and power delivery performance without FEM/MoM simulators.
For people and companies that can't afford 25k-300k per year in licence and compute costs, there is as yet no good standard way to estimate EM performance. Not to mention the dedicated simulation machines needed.
That's why these companies can charge so damn much. The systems are so complex that making a ton of assumptions to pump out some things by hand or with bulk circuit simulators often doesn't even get close to real world performance.
If someone figured out an accurate method without those simulations, the industry could also save a shit ton of compute power and time.
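For context, this is the kind of back-of-the-envelope estimate being referred to: the classic weak-coupling approximation for near-end crosstalk between two parallel traces, shown here as a tiny sketch with made-up per-unit-length values. It ignores losses, stubs, non-ideal return paths and basically everything the field solvers are paid to capture, which is exactly the problem.

```python
def next_coefficient(L_self, C_self, L_mutual, C_mutual):
    """Weak-coupling estimate of near-end (backward) crosstalk between two
    parallel traces: the coupled voltage is roughly this fraction of the
    aggressor's voltage swing. A hand estimate only, not a field solution."""
    return 0.25 * (L_mutual / L_self + C_mutual / C_self)

# Made-up per-unit-length values for two loosely coupled ~50-ohm microstrip traces.
L_self, C_self = 300e-9, 120e-12    # H/m, F/m
L_mutual, C_mutual = 30e-9, 8e-12   # H/m, F/m

k = next_coefficient(L_self, C_self, L_mutual, C_mutual)
print(f"NEXT ~ {k:.3f} of the aggressor swing, i.e. about {k * 1000:.0f} mV per 1 V edge")
```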