Monday, 7 June 2010
Thinking...
from Fear of Freedom:
"The thought that is the result of active thinking is always new and original; original, not necessarily in the sense that others have not thought it before, but always in the sense that the person who thinks has used thinking as a tool to discover something new in the world outside or inside himself. Rationalisations are essentially lacking in this quality of discovering and uncovering; they only confirm the emotional prejudice existing in oneself. Rationalising is not a tool for penetration of reality but a post-factum attempt to harmonise one's own wishes with existing reality."
Tuesday, 4 May 2010
Self-interest as a norm
A common response from those who seem to dislike the idea that people aren't rational, self-interested maximisers is to focus on the way that people learn.
For example, I've seen someone somewhere use this as an argument against the endowment effect. Their argument was that, yes, maybe people do put a higher value on something once they own it, but that may change with experience. So a second-hand car salesman won't experience the endowment effect in the same way (if at all?) in respect of a car they've just bought at auction compared to a regular punter. The point seems to be that people learn to value things more rationally the more they trade.
That all sounds ok. But I wonder what is really going on here. Is it that a rational, self-interested 'core' is exposed through repeated transactions? All that cranky irrational stuff gets worn away with experience. Or could it also be that to some extent we internalise a new norm and act in accordance with it?
Clearly I'm not discounting the idea that people do act in a rational and self-interested way. But to what extent does that actually result from an internal drive which might be uncovered through experience, and to what extent is it a norm that people behave in line with?
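Just to fix ideas, here's a toy sketch in Python (entirely mine - the numbers and the 'learning rate' are invented, not from any paper) of what the learning story looks like: a seller whose willingness-to-accept starts well above the market price and decays towards it with trading experience. Note that nothing in the model itself tells you whether the decay is an irrational bias wearing away or a trading norm being internalised - both stories fit the same curve.

```python
# Toy model of the endowment effect eroding with trading experience.
# All parameters are invented for illustration.

MARKET_PRICE = 100.0

def willingness_to_accept(n_trades: int, initial_premium: float = 0.5,
                          learning_rate: float = 0.3) -> float:
    """Seller's asking price: a premium over market price that decays per trade."""
    premium = initial_premium * (1 - learning_rate) ** n_trades
    return MARKET_PRICE * (1 + premium)

for n in (0, 1, 5, 20):
    print(f"after {n:2d} trades, willingness to accept = {willingness_to_accept(n):6.2f}")
# The regular punter (0 trades) demands ~150; the car dealer (20 trades) ~100.
```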
I reckon there's something worth thinking about here, and having googled 'Self-interest as a norm' I found this paper, which looks like it might be worth a read.
UPDATE: That paper is well worth a read. Lots of interesting stuff and useful refs in there.
Sunday, 14 February 2010
2 snippets
1. Interesting para at the end of this piece (presume it's from tomorrow's FTFM):
"There is an underlying question in all this about whether the public ownership model for companies still makes sense. Lord Myners was asked at the NAPF event whether voting rights should be transferred to employees. He replied mutual ownership was an interesting idea, as there were some real disadvantages of the public company model."
2. Quite liked this bit from 'On the fetish character in music and the regression of listening' by Theodor Adorno:
"If one seeks to find out who 'likes' a commercial piece [of music], one cannot avoid the suspicion that liking and disliking are inappropriate to the situation, even if the person in question clothes his reactions in those words. The familiarity of a piece is a surrogate for the quality ascribed to it. To like it is almost the same thing as to recognise it. An approach in terms of value judgements has become a fiction for the person who finds himself hemmed in by standardised musical goods."
I don't think I buy the general approach, but he certainly hits a few targets. I also think familiarity has an important influence on the popularity & legitimacy of ideas.
Sunday, 21 June 2009
Familiar ideas
A quick snippet. Last year I blogged a little bit about the legitimation of ideas (ie by what process an idea comes to be accepted or rejected as valid). A couple of bits on this here and here. Based on the totally non-scientific way I think my own brain operates I believe I (and probably others) have an inherent tendency to try and 'subdivide' new information by reference to concepts I already have in place.
In a stack of books I ordered recently I took a punt on this one. It turns out that I might not be miles off the mark. Berns argues that our brains are constantly trying to act as efficiently as possible, and once something is familiar (or appears to be so) they expend less effort on it. Once your brain has come across an item half a dozen times, the level of brain activity detected when being presented with it again is roughly half what it would be on first viewing. Berns argues that because our brains are seeking to be efficient we do indeed instinctively look for the familiar. It's less effort. He argues that this is why optical illusions 'work', even though we may 'know' that we are being presented with an illusion.
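To see the shape of that claim, here's a two-line sketch (my own back-of-the-envelope version - Berns just says activity roughly halves after half a dozen exposures; the 'half-life' framing is mine):

```python
# Back-of-the-envelope model: brain response to an item decays exponentially
# with prior exposures, halving every six viewings (per the Berns claim).

def brain_response(prior_exposures: int, half_life: float = 6.0) -> float:
    """Relative activity on viewing an item, as a fraction of the first viewing."""
    return 0.5 ** (prior_exposures / half_life)

for n in (0, 3, 6, 12):
    print(f"{n:2d} prior exposures -> {brain_response(n):.0%} of first-viewing activity")
# 0 -> 100%, 6 -> 50%, 12 -> 25%: the familiar takes less effort.
```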
Ho hum.
Friday, 12 June 2009
My kind of blah
From Risk, Uncertainty and Profit:
"Every event has an infinite number of causes, and it depends on circumstances, the point of view, the problem in had, which of these we single out as 'The' cause. 'The' cause of of a phenomenon is merely that one of its necessary conditions which is for some practical reason crucial, generally from the standpoint of control. It is the one about which we must concern ourselves, the circumstances enabling us to take the others for granted. It may be quite correct to name a dozen different antecedents as 'the' cause of a particular occurrence, according to the point of view."
Tuesday, 26 May 2009
The BNP, the Left and categorisation
Tom has an interesting post up about that old chestnut about the BNP being left-wing (he also has an utterly ace kittens video on his blog!).
It made me think about the point that some Righties make about how most BNP support comes in areas that have previously been Labour-voting. I'm genuinely not sure what this is supposed to mean. I mean there's sort of an implication that more working class areas are somehow inherently left-wing. So I guess the argument is that if such an area provides a lot of BNP votes then the BNP must be left-wing in order to gain their support. But then what if a formerly Labour seat is taken by the Tories - does that mean that the Tories are left-wing? It doesn't feel right as an argument somehow.
To be fair, Hayek does an alright job in The Road to Serfdom of tracing an intellectual line between 'socialism' and the Nazis. There is quite a lot of material provided from German socialists who argued that Germany embodied socialism and so was preferable to other more economically 'laissez-faire' countries, like Britain. I still think the vast bulk of the socialist movement had very little meaningful overlap with the Nazis. There was some overlap, both intellectually and in terms of people moving from one camp to the other, but I don't think it was significant. (I also don't buy Hayek's general argument about socialism leading to totalitarianism, but that's another story.)
That in turn got me thinking about how we define what is real, 'legitimate' socialism, or any other -ism, and what isn't. In my mind 'real' socialism can't overlap with Nazism because core principles - ie views of equality - are fundamentally in conflict. But why is my view of socialism any more legitimate than that of those socialists Hayek identified as arguing that Germany embodied the idea, and as such must triumph in war against its enemies in order to further the cause?
Are the core features of socialism those set out in various works of socialist theory, or can they be found in the policies that are adopted by parties and governments that call themselves socialist? Or should we make assessments based on the outcomes of the policies that are adopted - a tool many use to 'prove' Labour's lack of progressiveness, for example? As I've said before, whichever categorisation system you pick will (to a greater or lesser extent) define your answer for you. (And to be honest I suspect that some Righties who really aren't Hayekian adopt his analysis because it allows them to designate the BNP as being on the Left.)
Ultimately this one can't really be solved. If we adopt different classification systems then often we will end up with very different answers from the same info. And we can't overlook our tendency to choose a categorisation system that enables us to reach an answer we favour.
The only thing I can see to fall back on is reasonableness - and perhaps I'm reaching for a comfortable categorisation system myself. So, for example, is it 'reasonable' to class the BNP as left-wing, knowing all that we know about Left and Right, and the extremes on both sides, and how they behave? Is it a definition that people would accept in ordinary usage, as opposed to a theoretical classification argument? On this basis, calling them left-wing again just doesn't feel right. In common with the other Tom I think the clear racist element of the BNP's programme is too much of a hurdle to get over. Not very scientific I know, but in light of the problems in reaching a definitive answer from a theoretical point of view, is there a better way to answer this one?
Tuesday, 28 October 2008
Every time I learn something new it pushes some old stuff out of my brain
So says Homer in one episode of The Simpsons. But maybe Homer actually "misunderestimated" (Copyright G W Bush) himself by falling for the metaphor of the mind as a container. It's a very tempting metaphor, because it seems to get across the way that we access and 'store' new information. Here's a thumbnail sketch of the idea from the essay 'Is there a problem in explaining cognitive progress?' by Aaron Ben-Ze'ev in Rethinking Knowledge: Reflections Across the Disciplines.
"In this view, the mind is an internal container, and cognitive progress is a quantitative increase in the amount of internal representations. In such a mechanistic paradigm, the cognitive system remains more or less stable, the only difference being that its empty shelves are gradually filled with more information. Cognitive progress is attained by adding a certain part to an existing system. When this mechanistic picture is applied to to the realm of scientific knowledge, science is conceived of as essentially taking pictures of the external world; the more pictures science has, the more adequate the science is. Hence there is always linear progression. Both the individual person and science as a whole are constantly marching toward a better understanding of their surroundings."
But is this really how our minds work? In the essay he suggests adopting a different approach - the schema paradigm. In this view the mind is made up of capacities and states. In the container metaphor, when you want to recall information presumably you send the little bloke in your head off into the information warehouse to retrieve what is required. In the schema paradigm, however, the container metaphor doesn't work because key elements - capacities and states - are retained rather than stored.
"The capacity to play the piano and the state of being beautiful are retained but not stored. Similarly, capabilities are not brought out of storage but are realised or actualised. The state of a car in motion is not stored in its engine when the car is stationary; rather the car has the capacity to repeat its state of being motion. And by the same token, when a squeaky toy does not actually squeak, it retains (rather than stores) its capacity to squeak."
This, he argues, also explains something about the organisation of the brain:
"In a storehouse, it makes very little difference how the items are disposed or organised. Something may be stored at the right or left side of the storehouse without being affected. However, in the schema paradigm, organisation is an essentia property, not a later addition. The importance of organisation and relations in memory can, for instance, explain that it is much harder to recall the months of the year in alphabetical order than in their chronologial sequence. A junkyard or tapre recorder model of memory is feasible and even natural in the container paradigm, whereas the schema paradigm stresses the importance of the relations and organisation among the carious items. Many phenomena indicating the sensitivity of memory to organisation attest to the greater suitability of the schema than the container paradigm for memory."
Notably George Lakoff (yes, him again) has been here already, and has identified the container metaphor as one of the most prevalent. You can also see that it crops up in other areas - what about set theory in maths, for example? But as a way of understanding how we learn, perhaps it simply doesn't work. Sorry Homer.
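For what it's worth, here's a crude programming analogy of my own (not Ben-Ze'ev's) for the two paradigms - retrieval of a stored item versus actualisation of a retained capacity:

```python
# Container paradigm: recall is retrieval - the little bloke fetches an item
# from the warehouse, where it sat unchanged between uses.
warehouse = {"capital_of_france": "Paris"}
answer = warehouse["capital_of_france"]

# Schema paradigm: recall is actualisation - a retained capacity produces the
# state afresh, like the stationary car that retains its capacity for motion
# or the silent toy that retains (rather than stores) its capacity to squeak.
def squeak(pitch_hz: int = 440) -> str:
    """Nothing is stored between calls; the squeak is produced anew each time."""
    return f"squeak at {pitch_hz}Hz"

print(answer)    # Paris - fetched
print(squeak())  # squeak at 440Hz - actualised
```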
"In this view, the mind is an internal container, and cognitive progress is a quantitative increase in the amount of internal representations. In such a mechanistic paradigm, the cognitive system remains more or less stable, the only difference being that its empty shelves are gradually filled with more information. Cognitive progress is attained by adding a certain part to an existing system. When this mechanistic picture is applied to to the realm of scientific knowledge, science is conceived of as essentially taking pictures of the external world; the more pictures science has, the more adequate the science is. Hence there is always linear progression. Both the individual person and science as a whole are constantly marching toward a better understanding of their surroundings."
But is this really how our minds work? In the essay he suggests adopting a different approach - the schema paradigm. In this view the mind is made up of capacities and states. In the container metaphor, when you want to recall information presumably you send the little bloke in your head off into the information warehouse to retrieve what is required. In the schema paradigm, however, the container metaphor doesn't work because key elements - capacities and states - are retained rather than stored.
"The capacity to play the piano and the state of being beautiful are retained but not stored. Similarly, capabilities are not brought out of storage but are realised or actualised. The state of a car in motion is not stored in its engine when the car is stationary; rather the car has the capacity to repeat its state of being motion. And by the same token, when a squeaky toy does not actually squeak, it retains (rather than stores) its capacity to squeak."
This, he argues, also explains something about the organisation of the brain:
"In a storehouse, it makes very little difference how the items are disposed or organised. Something may be stored at the right or left side of the storehouse without being affected. However, in the schema paradigm, organisation is an essentia property, not a later addition. The importance of organisation and relations in memory can, for instance, explain that it is much harder to recall the months of the year in alphabetical order than in their chronologial sequence. A junkyard or tapre recorder model of memory is feasible and even natural in the container paradigm, whereas the schema paradigm stresses the importance of the relations and organisation among the carious items. Many phenomena indicating the sensitivity of memory to organisation attest to the greater suitability of the schema than the container paradigm for memory."
Notably George Lakoff (yes, him again) has been here already, and has identified the container metaphor as one the most prevalent. You can also see that it crops in other areas - what about set theory in maths for example? But as a way of understanding how we learn, perhaps it simply doesn't work. Sorry Homer.
Friday, 19 September 2008
Political views, brains and bodies
This is interesting (nicked from Paulie). So if you buy that political views have a basis in visceral reactions, and that conceptual metaphors are grounded in physical experiences, Lakoff's idea that Lefties and Righties have fundamentally different metaphors in respect of politics makes sense. The key question is to what extent are our reactions innate, and to what extent learned.
I've just finished reading Descartes' Error by Antonio Damasio which covers some of this stuff. Complicated area obviously, as we have different brain systems dealing with different factors (and Damasio's key point is that it's fundamentally enmeshed with the body). He does suggest that whilst we are born with a lot already in place, our emotions and feelings (which he distinguishes between) affect the development of other reactions. So in effect we are a 'work in progress' until we die. That makes me think a) that political views can be altered and b) that a significant visceral experience might be expected to have a big impact on views. Liberals who get mugged are one example, but equally the shared endeavour of WWII shaped politics for decades afterwards.
Ho hum.
Friday, 5 September 2008
Categorise this!
I'm still plugging away at this Prototype Theory stuff. One interesting idea is that it even affects our views of causation. Apparently we view the sort of billiard ball model of causation (ie A causes B to do C) as a 'better' version of causality than, say, change resulting from the confluence of a number of factors. I spose that might in part explain why we are drawn to simplistic explanations for events. Excuse my pensions geekery, but an obvious case in my mind is the idea that the abolition of dividend tax credits 'caused' the closure of final salary schemes.
In similar territory I've just come across a good example of how the way we categorise things depends a great deal on conceptual models we already have in place. The example is to think of the category 'bachelor'. I assume we all have a clear view of what this category means - a man who is not married. It also seems fairly clear that as a category you are either in it or not - it's not a graded category. But what about a boy lost in a jungle who grows up into an adult on his own - is he a bachelor? What about the pope? What about a gay bloke in a long-term relationship?
In fact whilst initially the designation 'bachelor' seems like a very straightforward yes/no bit of categorisation it is actually built on other assumptions, for example about marriage.
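Here's a toy version of that contrast, of my own devising (the criteria and equal weights are invented): the classical yes/no test versus a graded, prototype-style fit that makes the hidden assumptions about marriage explicit.

```python
# Classical categorisation: 'bachelor' = unmarried adult male, in or out.
def is_bachelor(male: bool, adult: bool, married: bool) -> bool:
    return male and adult and not married

# Prototype-style categorisation: degree of fit, 0.0 to 1.0, once the
# background assumptions (eligibility for marriage, social singleness) are
# spelled out. Criteria and equal weights are invented for illustration.
def bachelor_fit(male: bool, adult: bool, married: bool,
                 eligible_to_marry: bool, socially_single: bool) -> float:
    criteria = [male, adult, not married, eligible_to_marry, socially_single]
    return sum(criteria) / len(criteria)

# The pope passes the classical test but fits the prototype poorly.
print(is_bachelor(male=True, adult=True, married=False))  # True
print(bachelor_fit(male=True, adult=True, married=False,
                   eligible_to_marry=False, socially_single=False))  # 0.6
```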
Saturday, 26 July 2008
Metaphors We Live By
This book is ace. It's one of those books that manages to crystallize half-thought-out ideas and insights that you have but never really manage to develop. And once you get your head around the central ideas you can see how applicable these are in many different bits of the world.
Obviously, it's all about metaphors, and the early chapters of the book look at the types of metaphors we use and how prevalent they are. This stuff alone is really worth a read just to make yourself aware of just how often we use metaphors, but also how we use many different expressions of the same underlying metaphor. Take the example I posted previously:
Theories (and arguments) are buildings:
Is that the foundation for your theory? The theory needs more support. We need some more facts or the argument will fall apart. We need to construct a strong argument for that. I haven't figured out yet what the form of the argument will be. Here are some more facts to shore up the theory. We need to buttress the theory with solid arguments. The theory will stand or fall on the strength of that argument. The argument collapsed. They exploded his latest theory. We will show his theory to be without foundation. So far we have put together only the framework of the theory.
Surprising isn't it that we use loads of different expressions based around one metaphor? That leads on to one of the fundamental arguments in the book - that metaphors are not merely linguistic devices, they are conceptual. We don't just use the 'theories are buildings' metaphor to get across our message, we actually think and act in those terms too.
Off on a bit of a tangent I think this may in part explain why something can sound both logical and false at the same time - the communicator has metaphorical coherence, but the metaphor doesn't seem to capture what is being described. To my uncultured mind this also seems to fit together (like a construction...) pretty well with my other favourite view of the world, the narrative paradigm. Fisher suggests that we decide the validity of an argument based on narrative coherence. This obviously has some pretty major implications for our understanding of 'truth', and indeed the latter part of the book covers this in some detail. (I'm not going to go into this now as it goes much more into philosophy).
They also argue that our metaphors are grounded in experience, hence a lot of them are about space, orientation and travel. Think how often you use 'journey' metaphors to describe things, for example. This might be in terms of relationships - we're going our separate ways, the worst is behind us etc - or in terms of work - I personally use the phrase "I'm getting there" a lot in reference to work projects. So really we are perceiving first and describing second in terms of more direct/basic experiences.
The book's afterword is also well worth a read as it describes briefly how metaphor analysis has been applied in various fields from psychology to political science. The latter obviously interests me, and leads me on (it's a journey you see) to try and use this stuff politically. If we buy (and I think I do) the argument that metaphors are conceptual, not merely linguistic, then a) we ought to be able to identify the metaphors that people are using to understand the current situation and b) we may be able to establish alternatives.
Obviously I have a political bias, but I can't quite reconcile the strength of the rejection of Labour by the punters with what is actually going on in the UK. We have had a decade and a bit of uninterrupted prosperity (alright, partially debt-fuelled), with no major worries about inflation or unemployment or all the other big issues of the past. So why does it seem that the voters want to wipe us out at the next election? There must be a way of understanding the world they have developed that we ought to be able to engage with - but what is it and how do we do it?
Saturday, 12 July 2008
Why do people change their minds?
One of the things that has been interesting me lately is why people change their opinion on a given issue. Partly this stems from my own shifting views, particularly on work-related stuff, and in part also from my interest in political defections. Here's some blurb I wrote about this a few months back:
"I think the desire for narratives also manifests itself when people radically change their political views. It's notable that when some people's politics change they often seem to go through a wholesale change. We can all think of examples of those formerly some way out on the Left who subsequently became thoroughly right-wing. That is of course absolutely their right, but it is surprising that a number of such people seem to shift their view from Left to Right on each and every issue. It's unlikely that any 'side' is the sole repository for truth, therefore wouldn't we expect to see more people become unaligned (as they realise that the side they had affiliated with is 'wrong' on certain issues) or simply moderate their views, rather than shift from one pole to another? If people are rationally considering each and every issue we might well expect to find this happen, but if we are principally in the business of buying narratives then maybe it's not surprising to see people chuck out one and replace it with a different one. It's a lot easier than thinking through each issue in detail."
I think this sort of holds together, but I think it needs fleshing out a bit. What I've been thinking about lately is how those big headline beliefs (ie Left vs Right) sit on top of a stack of smaller ideas (what I'm going to call 'prop' beliefs). Now in order to shift fundamentally from one position to another, and for your new position to be sound, I think you must need those prop beliefs to support your big new idea. But how do you acquire them?
Personally, I have shifted a lot in the direction of letting people get on with things rather than trying to direct them centrally, so I'm much more comfortable with markets (in general) than I have been before. In addition I'm much less confident about the degree of control that is possible in any case. But how have I acquired these views?
In large part it is the result of work-related reading, but I can split this into sub-categories. There are areas that I understand well, where I think what I have done is looked at the empirical evidence (ie investment performance figures) and reached a given conclusion. Then there are the bits that I understand less well, where I think I have largely accepted propositions that sound reasonable (probably because they have narrative rationality) and/or because they come with the social proof of being advocated by 'experts'. So these are the prop beliefs supporting my revised perspectives.
Hopefully that all makes sense so far, but what interests me is which comes first, the big ideas, or the prop beliefs? Intuitively the changes in the prop beliefs ought to affect the bigger beliefs that sit on top of them. But going back to my bit of blurb from a previous post, I wonder whether it doesn't sometimes happen the other way around.
Perhaps we don't always acquire our big beliefs rationally by building on top of smaller established prop beliefs. Perhaps (for whatever reason) we acquire the big belief and then rapidly establish the props in our mind required to support it. That would seem to fit more easily with those cases where people switch completely from one perspective to another. In my own case, have I made up my mind to adopt new beliefs, and am now working on the evidence to support them, or do the prop beliefs come first? What's sauce for the goose and all that... Tricky one eh?
Off on a bit of a tangent, I'm always amazed to read pieces arguing that people are afraid to criticise [insert minority group, default option: muslims] for fear of being seen as prejudiced. I'm particularly amazed when I read such articles in national newspapers that regularly run negative articles about minorities. As far as I am concerned it is patently false that people are prevented from speaking out, because they seem to do it all the time.
What is also noticeable is the vehemence and self-righteousness with which these claims are made, and I wonder whether that isn't actually a bit of a clue to what is going on. For whatever reason, many people do feel a bit ashamed about laying into minorities. However, by instituting the prop belief that they are being censored from saying what they want, they are able to access a feeling of self-righteousness in 'speaking out'. In this example the prop belief would seem to be used to address the cognitive dissonance arising from their desire to say something and their desire not to be seen as insensitive/prejudiced.
Or am I talking rubbish?
Friday, 4 July 2008
Machines and metaphors
A passing reference to Paul Ormerod in this post on Stumbling and Mumbling made me go and have a quick flick through my copy of Why Most Things Fail. I found it quite a challenging read, as he is very sceptical about the record of social democracy. But to be honest I have quite a bit of sympathy with the general pitch that it's very difficult to plot a successful course of action given an unknowable future. There's a great bit early on where he translates a statement from a director of GM about a new launch as basically stating "Our new product model might do well, or it might not. We don't know."
But the bit I was reminded of when flicking through the book last night was this nice little para late on where Ormerod is describing Hayek's views:
"The visions of the world articulated by orthodox economics and by Hayek are fundamentally different. Conventional theory describes a highly structured mechanical system. Both the economy and society are in essence giant machines, whose behaviour can be controlled and predicted. Hayek's view is much more rooted in biology. Individual behaviour is not fixed, like a screw or cog in a machine is, but evolves in response to the behaviour of others. Control and prediction of the system as a whole is simply not possible."
The bit I really like is the reference to seeing systems as machines. I think this is a widespread and fundamental misconception. In my bit of the world I think it plagues some attempts at systemic reform. I think that there is an implicit assumption, for example, that the investor-company relationship would function radically differently if only different information was being fed into the machine.
(Incidentally this is another reason why I continue to think that the unions could be a serious force here. There has always been an element of scepticism on the unions' part about SRI because of its failure to address labour issues effectively. As a result they have often gone off and done their own thing, often quite successfully. Union investor activism can be successful (in my view) because it has immediate goals, rather than a systemic approach. Unions still need to have a view on the systemic issues, but seem to get less bogged down in this area than SRI proponents.)
More broadly I think this conception of systems as machines is further evidence of the way we are attracted to certain ideas and ways of viewing the world. Seeing systems as machines is comfortable because it implies that control is possible, and that you can have an idea of what the system working well looks like (and therefore how to achieve good results). I think these little short-cuts we use to describe bits of the world must have a fairly significant influence on how we understand it too.
And with that in mind, my next book purchase is going to be this.
Wednesday, 18 June 2008
Narrative rationality
Yep, narratives again. As I've said before, I'm a big fan of the narrative paradigm proposed by Walter Fisher. I've just got hold of this book, which includes his essay Narration, Knowledge and the Possibility of Wisdom. I think it's spot on. In a nutshell he argues that we assess things in terms of narrative rationality, 'good reasons' to believe/accept them or not.
To explain this in more detail, he argues that we determine these 'good reasons' by making assessments of the degree of coherence and fidelity within the proposition ('story'). Our assessment of coherence considers structural/argumentative coherence - does the argument hang together? We consider material coherence - comparing the 'story' with other relevant ones we can detect errors or omissions; does it fit with other 'true' accounts? And we consider characterological coherence - what do we think of the intelligence, integrity and values of the author of the 'story'?
Turning to fidelity, we consider both the reasons given and the values conveyed. In the first case we test things such as whether the bits of the narrative claimed as facts are indeed facts, and whether the reasoning is sound. In the second instance we try to establish the explicit and implicit values in the story, and whether, for instance, they are validated by our own experience.
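Just to fix the structure in my own head, here's the framework reduced to a checklist (a toy encoding of mine - Fisher obviously offers nothing so mechanical):

```python
from dataclasses import dataclass

@dataclass
class Story:
    argument_hangs_together: bool   # structural/argumentative coherence
    fits_other_true_accounts: bool  # material coherence
    credible_author: bool           # characterological coherence
    facts_check_out: bool           # fidelity: the reasons given
    values_ring_true: bool          # fidelity: the values conveyed

def good_reasons(story: Story) -> float:
    """Crude score: the share of Fisher's tests the story passes."""
    tests = [story.argument_hangs_together, story.fits_other_true_accounts,
             story.credible_author, story.facts_check_out, story.values_ring_true]
    return sum(tests) / len(tests)

# A well-made collage: coherent and credible, but the facts are shaky.
collage = Story(True, True, True, False, True)
print(f"good reasons: {good_reasons(collage):.0%}")  # 80% - which is the worry
```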
Apologies if that sounds a bit dry, but it's important to set the structure up before seeing how he applies it. You may be thinking that narrative rationality applies in, say, the political field, but Fisher argues that it even applies in science. He demonstrates this by applying his model to an analysis of James Watson and Francis Crick's proposal of the double helix model of DNA.
This goes on for several pages so I'm just going to post up a chunk that gives you a good idea (apologies for typos, I am copying straight from the book):
What arguments do the authors offer to support the truthfulness of the double-helix model? Their first argument was that the structure proposed by Linus Pauling and R. B. Corey was "unsatisfactory". The underlying reason for the rejection was that the Pauling-Corey model was not truthful; it violated chemical "laws" and prior research. Their second argument was that the structure put forward by Fraser was too "ill-defined" to warrant comment. Clearly, precision is a value and lack of it is sufficient reason, a good reason, for rejecting ideas that are "ill-defined". After describing their model - verbally and in diagram - Watson and Crick present an intertwined argument to establish its conformity with the "laws" of chemistry and current research data. Here again, there is an implication of a good reason: Sound theory is in accord with prior knowledge. Each of these three lines of argument, it should be noted, is not a strict logical demonstration, either deductive or inductive. Each is, however, a proper deductive argument if one grants the premise on which it is founded: good theory is truthful (that of Pauling-Corey is not truthful; it should be rejected); good theory is precise (Fraser's theory is not precise; it should be rejected); good theory is confirmed by the best available theory and evidence (ours is; therefore, it should be accepted). The "reasons" for accepting Watson and Crick's proposal, then, are good reasons, reasons informed by values: truthfulness, precision, conformity with the best that is known, and the promise of useful results in its application in further theory and research.
I read this and am both a bit ashamed and at the same time enthused. Ashamed because, thinking things through from Fisher's perspective, I can see how I have structured things in the past to ensure they have narrative rationality. I look back at a few policy papers I wrote and consider them to be collages rather than anything else. But because I think I am relatively good at structural/argumentative coherence I can make collages look more like analysis than they actually are.
On the other hand I also read this and realise that I can see people of different views (political in particular) doing exactly the same thing, and I think internalising this way of reading information can help sort the wheat from the chaff (if you believe that there is any wheat). This shouldn't be any surprise unless you believe that there is a correlation between proficiency in argument construction and political viewpoint.
It might cause me to kill some blog posts mid-composition too, as I can sometimes feel myself trying to give an argument coherence and fidelity that it may not deserve. Whether that's a good thing or not, I don't know.
Monday, 2 June 2008
Roland Barthes and bad writing
This is a bit off-topic. I've just ordered my latest stack of books from Amazon, and in amongst some work-oriented reading I thought I would take a punt on Mythologies by Roland Barthes. I read a fairly incomprehensible intro to semiotics last year, and his name cropped up a bit, plus the idea of 'myths' seems close to the concept of narratives so I thought I'd probably quite like it.
And I do, sort of. The essay on wrestling is apparently pretty famous and I can see why. It does really nail what watching wrestling is all about (not surprisingly, it's not about sport). I also enjoyed the piece about the Blue Blood Cruise, which reminded me of the time the royals did It's A Knockout. Here's an excerpt:
"[K]ings have a superhuman essence, and when they temporarily borrow certain forms of democratic life, it can only be through an incarnation which goes against nature, made possible through condescension alone. To flaunt the fact that kings are capable of prosaic actions is to recognize that this status is no more natural to them than angelism is to mere mortals, it is to acknowledge that the king is still king by divine right."
Spot on Ro-land.
Other bits of it are less convincing, and seem to be more about Barthes identifying his own message in an event/product/trend. Still, overall it seems like quite an interesting look at the messages within culture. And somehow I was reminded of this when I was reading Janet Daley's latest opinion piece in the Telegraph. It's this section that really stuck out:
"The notion that Big Government (whether in the central or the local form) could solve all social problems, and through its interventions achieve absolute justice and harmony, is collapsing. And in its last moments, in its disbelief and agony at its own failure, it is lashing out in every direction: if the earlier measures haven't dealt with crime/public disorder/anti-social behaviour/under-performing hospitals/insufficient recycling, we must add yet more layers of official interference."
It's the second sentence that is the key one. The alarm bells always go off when I read someone trying to personify an idea. In more subtle versions of this people try and associate ideas with particular types of people. But the version above is great because it actually suggests that an idea itself can have some sort of physical manifestation - one that can feel 'disbelief' and 'agony' and is capable of 'lashing out'. No doubt lots of people will read the column without giving this a second thought, and may even perhaps imagine terrified, confused, theoreticians/bureaucrats to be the ones feeling the agony and doing the lashing out. But in the article itself it is clearly an idea doing this, something which is obviously impossible.
I'm never quite sure with examples like this which way the influence runs. Is the writer seeking to attach negative imagery to the idea deliberately? Or is their resistance to or dislike of the idea so great that they imagine it as a 'bad' person? Either way it is a poor way to write/think.
Saturday, 8 March 2008
Language and knowledge
Here's an interesting thought. Do you think you can understand any concept, provided that it is explained clearly enough? Sometimes (I won't say how often) when I am reading things I struggle to make sense of what the author is trying to get across. Is that because I am unable to grasp the concept, or is it because the author is failing to communicate clearly enough? When I have asked people this question I have been surprised by how much faith we have in our own intelligence. Most people think they can grasp any concept provided that it is explained clearly enough. So if people only wrote better books we would all be experts.
I am not convinced. Yesterday whilst I was killing time before my flight I was reading a section of AC Grayling's Wittgenstein: A Very Short Introduction where he sought to get across a point about different 'language games'. I must have read and re-read those few paragraphs about six times struggling to get the meaning, and I still don't quite grasp it (maybe I'll have a cuppa and another go later). I would put money on it that AC Grayling is smarter than I am, and that he is quite capable of explaining things clearly. So I have to assume I am (or was being) a bit dense.
What's more, I became aware that in trying to get my hands on the meaning of the passage I was trying to find something familiar in it. In other words, I think I was trying to understand the concept by breaking it down into things I already 'know'. Of course it's partly mental laziness. In fact someone I posed the original question to said that yes, you can grasp anything, provided that you want to. I am sure that attitude must play a role, but I don't quite agree. My view is that it's about familiarity. It's hard learning new things; it's much easier sort of 'spotting' concepts you already get. People seem to derive great enjoyment from doing well what they know how to do. Maybe the same process is at work when we try to understand something - it's more enjoyable to think about it in a way we already know how to think. We enjoy the familiarity.
In epistemology one of the big theoretical divides is between "knowledge that" and "knowledge how". Here's how the Wiki article on epistemology explains it:
For example: in mathematics, it is known that 2 + 2 = 4, but there is also knowing how to add two numbers. Many (but not all) philosophers thus think there is an important distinction between "knowing that" and "knowing how", with epistemology primarily interested in the former. This distinction is recognised linguistically in many languages, though not in modern English except as dialect (see verbs "ken" and "wit" in the Shorter Oxford Dictionary).
I guess in this instance "knowledge that" is what I describe above as "familiar". So what surprises me is that even in an area like philosophy "knowledge that", or maybe our desire for "knowledge that", can play a big role. And it leads me to question whether maybe a lot of what we take to be "knowledge how" is actually "knowledge that". Because it is very familiar to us we think it is more conceptual than it actually is.
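To make the distinction concrete, here's a toy sketch of my own (in Python, purely illustrative - the names and examples are mine, not the Wiki article's): 'knowledge that' behaves like a stored fact you can only look up, while 'knowledge how' behaves like a procedure that works on inputs you've never met before.

```python
# A toy illustration (mine, not the Wiki article's) of the
# "knowing that" / "knowing how" distinction.

# "Knowledge that": discrete, memorised propositions. It only covers
# the cases you have already stored - the familiar ones.
facts = {("2", "+", "2"): 4}

def recall(question):
    """Look up a memorised fact; fails on anything unfamiliar."""
    return facts.get(question)  # returns None for unseen questions

# "Knowledge how": a procedure that generalises to novel inputs.
def add(a, b):
    """Actually perform addition, familiar or not."""
    return a + b

print(recall(("2", "+", "2")))    # 4    - familiar, so 'recognised'
print(recall(("17", "+", "25")))  # None - unfamiliar, recall fails
print(add(17, 25))                # 42   - the procedure still works
```

On that picture, my worry above amounts to suspecting that a lot of what feels like the procedure is really just a very large table of stored facts.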
The caveat is of course that maybe this is just the way my mind operates, maybe other people approach knowledge differently. Indeed I do think that people have differing tastes for certain "forms" of concepts, and that these can be developed. But I think generally we are more similar than we are different and as such others must suffer from the same failings that I do.
Thursday, 31 January 2008
Rambling about writing and thinking
As I have mentioned before, I'm quite a fan of the narrative paradigm - the idea that we are principally story-telling animals and that most of our communication takes the story-telling form. We are faced by a complex world which we struggle to make sense of, so it is easier to understand it in terms of different narratives, each with plots and characters in them.
One of the reasons I find this a compelling way of looking at the way we understand reality is the number of times you find personality-based (or dispositional, to use the lingo) explanations of events cropping up in political 'analysis', or the way people try and personify political opinions they disagree with. The fact that we often rely on dispositional explanations for events is surprising. Unless we were able to run the same event over and over again and witness the decisions made by people of differing personality types, it's difficult to see why we should have confidence that the character of the individual is the swing factor. Yet such explanations crop up all the time. As an example, here's a line from the Telegraph business section last year about Mervyn King's role in the Northern Rock crisis:
"More than ever it looks like King has been reading from the wrong page of the regulatory manual, betraying his background as a clever academic when the situation required the gut feel of a banker."
Much journalistic writing, even that which is offered up as analysis, strikes me as very obviously falling into the story-telling category. This is probably not surprising given that many journalists are not experts in the fields they cover. Hence what they write tends to rely on narrative explanations rather than a genuine understanding both of what is happening and why it is happening. Personalities play a big role, as do simple cause and effect interpretations. A good example from the financial world is the impact of the abolition of dividend tax credits on pension funds. It clearly did not help as it reduced the investment income available, but I think it's simply wrong to suggest it is the 'cause' of the closure of final salary schemes. Yet it is frequently referred to as the 'cause' because it's an easy way to explain what happened and means that we can put blame on a decision made by an individual. The alternative is to see the closures as the result of an unhelpful combination of a number of factors. But that's a complicated explanation and not much fun.
Don't underestimate our attachment to our narratives. The first Pensions Commission report covered the closure of final salary schemes and basically made the point that getting rid of tax credits wasn't the cause (they identified various factors - mortality, the post-2000 bear market, increasing regulation guaranteeing benefits that were previously discretionary etc). I noticed at the time that when faced with this challenge to their narrative at least one well-known financial journo interpreted this as being the result of Adair Turner trying to avoid saying anything politically embarrassing. Whilst I'm not naive enough to think such things don't happen, why overlook the possibility that the report was telling the truth? Maybe because the dispositional get-out means that the journo can continue to use their narrative short-cut understanding of the issue, rather than considering the evidence and perhaps revising their opinion.
I think the desire for narratives also manifests itself when people radically change their political views. It's notable that when some people's politics change they often seem to go through a wholesale change. We can all think of examples of those formerly some way out on the Left who subsequently became thoroughly right-wing. That is of course absolutely their right, but it is surprising that a number of such people seem to shift their view from Left to Right on each and every issue. It's unlikely that any 'side' is the sole repository of truth, therefore wouldn't we expect to see more people become unaligned (as they realise that the side they had affiliated with is 'wrong' on certain issues) or simply moderate their views, rather than shift from one pole to another? If people are rationally considering each and every issue we might well expect to find this happen, but if we are principally in the business of buying narratives then maybe it's not surprising to see people chuck out one and replace it with a different one. It's a lot easier than thinking through each issue in detail.
I'm not claiming any superiority here - I am as bad as anyone. I used to go on a mainstream news messageboard quite a bit to talk (mainly) politics with people. But I grew frustrated at the nature of the discussions that took place. It was almost as if there was a sort of choreography at work. If Right-Wing Person A makes Point 1, then the 'correct' response from Left-Wing Person B is Counterpoint 2. There is clearly an element of pattern recognition going on when you are involved in this type of 'discussion'. Of course George Orwell got there first. This is from Politics And The English Language:
"A speaker who uses that kind of phraseology has gone some distance towards turning himself into a machine. The appropriate noises are coming out of his larynx, but his brain is not involved as it would be if he were choosing his words for himself. If the speech he is making is one that he is accustomed to make over and over again, he may be almost unconscious of what he is saying, as one is when one utters the responses in church."
On the messageboard I was no better than anyone else at avoiding this, but after a while I could at least recognise that I was feeling pulled towards writing certain responses without really thinking about the content of the proposition I was about to challenge or that of my response. As a result I increasingly found myself not responding to things unless I felt a) I knew what I was talking about and b) I was actually going to add something to the discussion.
In real life I still find it a problem. When I am trying to understand what is going on in the financial world I often feel the pull to adopt a mini conspiracy theory about why a particular organisation has done something (or not done something) rather than looking at the evidence. One plus point in all this is that I actually feel a lot better when I have put the effort in to understand an issue properly. Despite the strong desire we feel to adopt a certain narrative, I think that once you make yourself aware of the process, and how it can lead you to a wonky understanding of an issue, you can act to correct it. I think I am slowly training myself to be uneasy when I feel that my understanding of an issue is narrative-driven.
Of course you can only take this so far. Conversation and writing would get pretty slow and boring if they were reduced to a collection of evidence-based propositions and their refutation. But I have this recurring thought that unless you do keep plugging away to try and get some sort of handle on the 'truth' of an issue then we simply end up with our heads full of narratives that add very little of value to our understanding of the world.
Friday, 11 January 2008
Anti-anti-war propaganda and knowledge
I found this article in the WSJ, following a link in a comment on the perpetually outraged SWP-aligned blog Lenin's Tomb. For once the Trots actually have a fair reason to kick off as this opinion piece is pretty rubbish, as you can tell from the first para.
Three weeks before the 2006 elections, the British medical journal Lancet published a bombshell report estimating that casualties in Iraq had exceeded 650,000 since the U.S.-led invasion in March 2003. We know that number was wildly exaggerated. The news is that now we know why.
The last two sentences really bother me. The assertion is broadly that you can know that a proposition is false without knowing why. Maybe I've been reading too much blah about epistemology lately, but I'm a bit sceptical that this is actually possible. I use my own lack of knowledge on this subject as a case in point. I also found the Lancet figure problematic - it sounded way too high. But if I am honest my 'knowledge' of the real level of deaths in Iraq is drawn solely from the media, and has no empirical basis. It is therefore quite possible/likely that my idea of what a reasonable estimate of these numbers would be is anchored by previous numbers that I have heard used (for example by Iraq Body Count).
I don't therefore think that my feeling that the Lancet figure was wrong is justified true belief, and from reading the WSJ piece I'm fairly sure the author is in the same boat. I would argue that whoever wrote the WSJ piece believed, rather than knew, that the Lancet figures were exaggerated, and based on what they have written I don't think they know 'why' they hold the belief either. The 'justification' (the 'why') does not 'justify'. Therefore this article is actually more a useful example of confirmation bias in practice than anything else.
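To spell that out, here's a toy sketch of the classical 'justified true belief' account (my own illustration in Python, nothing drawn from the WSJ or the Lancet): knowledge requires all three legs, so if the justification doesn't stand up you are left with mere belief, not knowledge - even if the belief happens to be true.

```python
# A toy formalisation (mine) of the classical 'justified true belief'
# account of knowledge that the paragraph above leans on.

def knows(p_is_true, believes_p, justification_holds):
    """S knows that p only if p is true, S believes p, AND
    S's justification for p actually stands up."""
    return p_is_true and believes_p and justification_holds

# The WSJ author's position, on my reading of the piece:
believes_figures_exaggerated = True
# Suppose, for the sake of argument, the figures really were exaggerated...
figures_exaggerated = True
# ...but the offered 'why' is ad hominem, so the justification fails:
justification_stands_up = False

print(knows(figures_exaggerated,
            believes_figures_exaggerated,
            justification_stands_up))
# False - true belief without a working justification is not knowledge,
# which is the sense in which the 'why' does not 'justify'.
```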
For example, when you look at the actual 'evidence' the WSJ uses to explain 'why' the report exaggerated the death toll in Iraq, the argument looks threadbare. It's principally a list of the political affiliations of those involved with producing, funding and publishing the report, not an examination of their data or methodology. In other words the report findings are wonky because of the politics of those involved. I'm not at all questioning that beliefs can influence our assessment of evidence, but you need to prove that this has happened, not just imply that it has taken place. Otherwise it's just an ad hominem attack, surely?
The one bit in the article that gets anywhere close to querying the data is actually buried down the bottom. The legitimate criticism here is that the Iraqi researcher involved in the report did not make his data available to others. But it isn't explored any further. The point that he also made claims about depleted uranium is again more along the lines of 'he says things we don't like, therefore his evidence is flawed'.
Finally, the last paragraph is also worth a look:
In other words, the Lancet study could hardly be more unreliable. Yet it was trumpeted by the political left because it fit a narrative that they wanted to believe. And it wasn't challenged by much of the press because it told them what they wanted to hear. The truth was irrelevant.
As regular readers will know, I enjoy the exposure of narratives masquerading as knowledge, but there are several things wrong with this. On a practical point, from what I remember the Lancet study was challenged in a number of places. I certainly debated it with people on messageboards, and I remember Dubya's comment that the study was 'not credible' being featured in media reports. Secondly, the article does not prove that the Lancet report is unreliable, even though I share the belief (rather than justified true belief, ie knowledge) that the figures do not stack up.
But more broadly the paragraph could have been written about the coverage of any number of reports by non-specialist media. Skewed and unreliable reports are trumpeted by differing political factions all the time. And if the line being advanced fits the narrative the media want then such studies get swallowed hook, line and sinker. Shockingly, this applies to Fox News and the WSJ as much as it does to the BBC and The Grauniad. So while I agree with the thrust of the WSJ's piece here, personally I think it's not really a comment about the Lancet report, or the claims of the anti-war movement. It's actually a comment about how unreliable the media can be as sources of information - a point this article itself unwittingly demonstrates very well.