
Recommendations please.



Hi again, been a while.

 

So as it happens, I've been watching some YouTube vids in my spare time, and lately I've been drawn to lectures and debates on the sciences, metaphysics and politics.

 

So, I've watched pretty much every Carl Sagan, Neil deGrasse Tyson, Richard Dawkins, Sam Harris, Michio Kaku, Penn Jillette, Bill Maher, Stephen Colbert, Jon Stewart and Christopher Hitchens video I can find.

 

So now I'd be elated if anyone here could suggest speakers or commentators who are equally witty, fun and mentally stimulating. And please, no stand-up comedy; I'm looking for actual thought-out commentary or satire, not punchlines.


It's really more of a historical perspective on scientific development, but the Connections series with James Burke is an excellent general overview of how modern scientific society developed. He's also done The Day the Universe Changed, which is slightly more technical and more narrowly focused on science than Connections is, but I feel it's not quite as good.

 

I recommend starting with Connections 1, then going on to DTUC, and then seeing Connections 2&3 if you're still interested. 1 is worth a look regardless, though. It's probably my favorite documentary type thing ever.


There are a lot of excellent speakers on Youtube in this field, too. For example, Cristina Rad does some really good and witty stuff.

 

Since you mentioned Carl Sagan, I'm guessing you already saw all of Cosmos, but it's worth mentioning again specifically. Awesome stuff.

 

Other than that I've probably watched less than you have; the commentators I follow are mostly bloggers rather than speakers.


Originally Posted By: Lilith
Originally Posted By: Polaran

Since you mentioned Carl Sagan, I'm guessing you already saw all of Cosmos, but it's worth mentioning again specifically. Awesome stuff.


don't forget about [embedded video] of his work


While we're on remixes of Sagan: Symphony of Science. I know autotune irks a lot of people, but some of this stuff is simply awesome.

Thanks a lot guys.

 

Yeah, I've watched a lot of Cosmos, and while the parodies are funny, they're not really all that informative.

 

I've looked at quite a few TED talks even before I asked this of you, and while some of them are quite good, most are just not what I'm looking for.

 

James Burke is pretty interesting, just watched a couple Connections episodes. It seems a bit dated, but so was Cosmos, so it's all good.

 

I wasn't much impressed with Cristina Rad; I think other atheist video bloggers such as Thunderf00t and AronRa do a much better job of presenting the case. Though that may be due in part to the fact that I have almost as hard a time listening to her speak as I do Stephen Hawking.

 

Richard Feynman was very fun to watch. He is an excellent lecturer and can make pretty much anything interesting. In what way is he too Popperian? That he demands falsifiability? That he suggests we employ critical rationalism? I'm shocked that someone had to suggest these as processes in the scientific method. They seem obvious to me, though that may come from my having learned the scientific method only after they were accepted into it as canon, as it were.

 

As for David Pakman, I honestly didn't even bother listening to the podcast, but out of curiosity I looked him up on YouTube and realized that I had already watched a couple of his shows, specifically the one where Neil deGrasse Tyson was interviewed.

 

Thanks all, you really gave me some very good material, some interesting new stuff to think on. If anyone has anything to add, please don't hesitate.


Originally Posted By: Radix Malorum Est Cupiditas

Richard Feynman was very fun to watch. He is an excellent lecturer and can make pretty much anything interesting. In what way is he too Popperian? That he demands falsifiability? That he suggests we employ critical rationalism? I'm shocked that someone had to suggest these as processes in the scientific method. They seem obvious to me, though that may come from my having learned the scientific method only after they were accepted into it as canon, as it were.


the problem with falsifiability is that it's a mirage. no matter what evidence people find to "falsify" a hypothesis, you can always explain it away or make an ad-hoc modification to your hypothesis. in fact, if you have an otherwise compelling theory that explains a bunch of stuff and makes useful predictions, sometimes doing one of those things is the sensible thing to do. falsificationism lacks adequate criteria for judging when and in what ways it's legitimate to adjust a theory in the face of contradictory evidence; as such, it's prone to throwing the baby out with the bathwater

Originally Posted By: Lilith

the problem with falsifiability is that it's a mirage. no matter what evidence people find to "falsify" a hypothesis, you can always explain it away or make an ad-hoc modification to your hypothesis. in fact, if you have an otherwise compelling theory that explains a bunch of stuff and makes useful predictions, sometimes doing one of those things is the sensible thing to do. falsificationism lacks adequate criteria for judging when and in what ways it's legitimate to adjust a theory in the face of contradictory evidence; as such, it's prone to throwing the baby out with the bathwater


An "ad-hoc modification to your hypothesis" can also be viewed as an improvement to the hypothesis to better reflect the current state of knowledge. And while I agree that making useful predictions is a good thing, using it as the only basis for judging a hypothesis can have its pitfalls. For one thing, it makes it very difficult to compare the validity of competing hypotheses.

To give an example, researchers have been measuring gene expression in tumors from cancer patients, with one of the goals being to find prognostic gene expression signatures that can predict which patients have rapidly growing tumors and might therefore be suitable for more aggressive treatment regimens. This is certainly a laudable goal. It has led to a cottage industry of gene expression signature papers in which each group working on the problem takes the same set of data, finds a prognostic signature, and shows that their method of finding the signature is good and useful by predicting patient survival from the signature. The problem is that the published signatures have very little in common with each other, and when people have tried to validate them on independent sets of patients, the results have generally been disappointing.

Last year a group did an analysis based on a data set very commonly used to make these predictions, and found that even randomly selected sets of genes, used as a signature, could predict patient survival, and that the majority of published signatures did no better than a random selection. If the reviewers had demanded falsifiability in these cases, the worse-performing signatures would have been rejected before publication, which probably would have been a good thing.
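To make that concrete, here is a toy sketch of the kind of sanity check involved (my own illustration in Python with scikit-learn on synthetic data, not the paper's actual analysis, and using a plain classifier rather than a proper survival model):

```python
# Toy illustration (hypothetical): compare one "published"-style gene signature
# against randomly drawn gene sets on a held-out cohort. All data are noise,
# so neither should genuinely beat the other.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_genes = 200, 5000
X = rng.normal(size=(n_patients, n_genes))   # fake expression matrix
y = rng.integers(0, 2, size=n_patients)      # fake outcome (e.g. 5-year survival)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def signature_auc(genes):
    """Fit on the training cohort, report AUC on the held-out cohort."""
    model = LogisticRegression(max_iter=1000).fit(X_train[:, genes], y_train)
    return roc_auc_score(y_test, model.predict_proba(X_test[:, genes])[:, 1])

published = rng.choice(n_genes, size=70, replace=False)   # stand-in "published" signature
pub_auc = signature_auc(published)
random_aucs = [signature_auc(rng.choice(n_genes, size=70, replace=False))
               for _ in range(100)]

print("'published' signature AUC:", pub_auc)
print("fraction of random signatures at least as good:",
      np.mean(np.array(random_aucs) >= pub_auc))
```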

If anyone is interested, here is a link to the article:

http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1002240

I get the complaints about falsifiability, but the big question is whether or not there's a 'better' approach to use.

 

Originally Posted By: darint
To give an example, researchers have been measuring gene expression in tumors from cancer patients, with one of the goals being to find prognostic gene expression signatures that can predict which patients have rapidly growing tumors and might therefore be suitable for more aggressive treatment regimens. This is certainly a laudable goal. It has led to a cottage industry of gene expression signature papers in which each group working on the problem takes the same set of data, finds a prognostic signature, and shows that their method of finding the signature is good and useful by predicting patient survival from the signature. The problem is that the published signatures have very little in common with each other, and when people have tried to validate them on independent sets of patients, the results have generally been disappointing.

 

Last year a group did an analysis based on a data set very commonly used to make these predictions, and found that even randomly selected sets of genes, used as a signature, could predict patient survival, and that the majority of published signatures did no better than a random selection. If the reviewers had demanded falsifiability in these cases, the worse-performing signatures would have been rejected before publication, which probably would have been a good thing.

Huh. I don't know genetics or medicine or anything like that, but I do know machine learning, and you're making it sound like the researchers are testing on the training set (or, slightly worse, always using the same testing set). If that's the case, then of course overfitting is going to happen, and they need to do something to mitigate it. I'd have to actually read the paper to see if that's the case, though.

 

(To be fair, the researchers likely have to deal with a very small sample size with a high number of features, and it's very, very tough to learn good models in such situations.)
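Just to spell out what I mean by testing on the training set, here's a throwaway sketch (entirely made up, nothing to do with the actual studies): with far more features than samples, a model fit to pure noise can look nearly perfect on its own training data and still sit at chance level under proper held-out evaluation.

```python
# Hypothetical sketch: 60 samples, 2000 noise features, random labels.
# Training-set accuracy looks (near-)perfect; cross-validated accuracy is ~chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(60, 2000))     # 60 "patients", 2000 "genes", all noise
y = rng.integers(0, 2, size=60)     # random labels: nothing real to learn

model = LogisticRegression(max_iter=1000).fit(X, y)
print("training-set accuracy:", accuracy_score(y, model.predict(X)))

print("5-fold cross-validated accuracy:",
      cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```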


I think there's a misrepresentation of falsifiability here. The question isn't whether or not a hypothesis can be rescued from falsification, but whether there are conditions under which the hypothesis, as formulated, can be found false. If data are found that contradict a hypothesis, and the hypothesis is revised, found to be wrong again, and touched up a little bit more, it'll usually reach the point where its proponents give up on it or everyone else does.

 

—Alorael, who notes that proving a theory false doesn't even lead to rejection of that theory immediately. If it's been found useful and verifiable enough to make it to theory status, some newly discovered or observed or considered case where the theory doesn't hold doesn't make it a bad theory. It means there's something missing. It's the "theories" that have no falsification condition that are pseudoscience.


Originally Posted By: darint
Last year a group did an analysis based on a data set very commonly used to make these predictions, and found that even randomly selected sets of genes, used as a signature, could predict patient survival, and that the majority of published signatures did no better than a random selection. If the reviewers had demanded falsifiability in these cases, the worse-performing signatures would have been rejected before publication, which probably would have been a good thing.


in what sense would "demanding falsifiability" have helped in that case? the problem was never that the claim "these genes are sufficiently correlated with breast cancer progression that they can be used to predict patient survival" was unfalsifiable, the problem was that the sample sizes being used in the studies were too small to be representative of the population.

the first signature found as a marker for cancer prognosis probably won't be the best possible one in any case, but given that we don't have an infinite amount of time to find the best answer, anything that works is better than nothing. if the published signatures had actually worked to predict patient survival, it wouldn't matter what proportion of random signatures performed just as well as them. (of course, it does matter if you're attempting to divine some kind of information about the biology of breast cancer from the performance of the signature, but that's a separate issue)

Originally Posted By: Wagon Train to the Sea
I think there's a misrepresentation of falsifiability here. The question isn't whether or not a hypothesis can be rescued from falsification, but whether there are conditions under which the hypothesis, as formulated, can be found false. If data are found that contradict a hypothesis, and the hypothesis is revised, found to be wrong again, and touched up a little bit more, it'll usually reach the point where its proponents give up on it or everyone else does.


you've pretty much just summarised half of lakatos' model of progressive and degenerating research programs, which is a substantial improvement on popper. the key question is whether or not the cruft you add to salvage your theory from contradictory evidence itself produces new predictions that are supported by later experimental evidence, or whether it is indeed nothing but cruft

also, talking about "the hypothesis as formulated" implies that all science is about testing a pre-conceived hypothesis, which is a common public misconception about science and in practice is just silly. there's a great deal of exploratory research that really has nothing to do with hypothesis testing, because there isn't yet enough information to formulate a specific hypothesis to test. that's still science

Originally Posted By: Dintiradan
I get the complaints about falsifiability, but the big question is whether or not there's a 'better' approach to use.

Originally Posted By: darint
To give an example, researchers have been measuring gene expression in tumors from cancer patients, with one of the goals being to find prognostic gene expression signatures that can predict which patients have rapidly growing tumors and might therefore be suitable for more aggressive treatment regimens. This is certainly a laudable goal. It has led to a cottage industry of gene expression signature papers in which each group working on the problem takes the same set of data, finds a prognostic signature, and shows that their method of finding the signature is good and useful by predicting patient survival from the signature. The problem is that the published signatures have very little in common with each other, and when people have tried to validate them on independent sets of patients, the results have generally been disappointing.

Last year a group did an analysis based on a data set very commonly used to make these predictions, and found that even randomly selected sets of genes, used as a signature, could predict patient survival, and that the majority of published signatures did no better than a random selection. If the reviewers had demanded falsifiability in these cases, the worse-performing signatures would have been rejected before publication, which probably would have been a good thing.
Huh. I don't know genetics or medicine or anything like that, but I do know machine learning, and you're making it sound like the researchers are testing on the training set (or, slightly worse, always using the same testing set). If that's the case, then of course overfitting is going to happen, and they need to do something to mitigate it. I'd have to actually read the paper to see if that's the case, though.

(To be fair, the researchers likely have to deal with a very small sample size with a high number of features, and it's very, very tough to learn good models in such situations.)


The original signatures came from 47 independent publications, so it is a bit hard to generalize. But many attempt some sort of cross-validation, even if it is the rather pathetic "leave-one-out" style that is hardly any better than not doing any at all. Some attempt independent validation, but usually on very small sample sizes. And one is never sure whether they originally found several signatures and only published the one that happened to validate on their small reproducibility set. This happens more often than you would think.
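Just to show how easy it is to fool yourself that way, here is a toy sketch of the selection effect (my own made-up simulation, not drawn from any of the actual papers): try many candidate signatures on pure noise, "publish" whichever looks best on a tiny reproducibility set, and it will still fall apart on a genuinely independent cohort.

```python
# Hypothetical sketch of selective reporting: pick the best of 50 noise
# signatures on a tiny validation cohort, then test it on a larger
# independent cohort that played no part in the selection.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_genes = 2000

def cohort(n):  # synthetic expression data and binary outcomes
    return rng.normal(size=(n, n_genes)), rng.integers(0, 2, size=n)

X_train, y_train = cohort(80)    # discovery cohort
X_small, y_small = cohort(20)    # small in-house "reproducibility set"
X_indep, y_indep = cohort(300)   # independent cohort nobody saw during selection

def auc(model, genes, X, y):
    return roc_auc_score(y, model.predict_proba(X[:, genes])[:, 1])

best_auc, best = -1.0, None
for _ in range(50):              # 50 candidate signatures
    genes = rng.choice(n_genes, size=70, replace=False)
    model = LogisticRegression(max_iter=1000).fit(X_train[:, genes], y_train)
    score = auc(model, genes, X_small, y_small)
    if score > best_auc:
        best_auc, best = score, (genes, model)

genes, model = best
print("AUC on the small reproducibility set (what gets published):", best_auc)
print("AUC on the independent cohort:", auc(model, genes, X_indep, y_indep))
```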

Science is also begging people for funding, rare but highly publicized lucky accidents that lead to breakthroughs, and a lot of time spent spinning your wheels on experiments that aren't really all that useful, done either to get reviewer #2 to stop whining or because you're out of ideas for how to get past that big hurdle in your work. All of these are science, but I don't think Popper was concerned with the process of science so much as the philosophy of scientific thinking/ideas/theories.

 

—Alorael, who recommends Bruno Latour if you want to argue about the gritty details of science in action, which is also the title of one of the man's books. At the very least there's plenty there to get annoyed, baffled, or both about.

