
/townhall/ - Townhall

A place for civilized animals

 No.4

File: 1559622788167.jpg (49.91 KB, 960x720, 4:3, Hard Determinism Compatibi….jpg)

Today I heard a new term called compatibilism and it got me looking at this whole philosophical discussion.

I consider myself a hard determinist, though I'm honestly struggling to grasp the concept of compatibilism and what it implies. So I'm open to the possibility that I'm claiming I don't believe in free will while simultaneously not understanding what a compatibilist thinks free will is.

For instance, I told a friend he should make a choice, to try to be better, to not let his emotions get the best of him. Does that mean I think he has the free will to choose that for himself? No, but what I know is that our determined course of action is calculated based on the entire state of the universe, including the words I have said to him. So, in effect, telling him to make a choice has not altered fate, but it was a component of fate in motion, and that input into the cosmic equation was a necessary part of him getting from one predetermined state to the next.

Can anyone help me better understand what compatibilism is? It's completely frying my brain trying to understand the concept as I've been reading about it. Like, everything I read about it sounds precisely like hard determinism but just wrapped up in a way in which we can still hold people accountable morally for their predetermined actions. Maybe, that is precisely what it is? Thoughts?

 No.5

Determinists believe that every event is necessitated by antecedent events in conjunction with the laws of nature. Compatibilists are also a form of determinists, and they hold that, because free will is often taken to be the ability to act free from arbitrary influence, we should not think of free will and determinism as being incompatible (Stanford Encyclopedia of Philosophy).

Generally speaking, only people who believe that free will must necessarily be libertarian free will, and who also believe that everything in the world is caused by antecedent causes and their interaction with natural laws, are hard determinists (Stanford Encyclopedia of Philosophy).

The problem with being a determinist without being a compatibilist is that you end up without an answer to the free will problem. The free will problem is essentially that if every event is causally determined, then no person can be said to hold responsibility for their actions, and we can't ever say that anyone chooses to do anything, because they could not have acted in any other way than they did.

If you're a compatibilist, you understand that our common understanding of the idea of choosing, and our ideas of personal responsibility and choice, do not include the ability to take other actions than you necessarily will. Because we draw these associations so often in language, it is inconsistent to believe, as many incompatibilists hold, that free will must be defined such that a willed action must originate in the acting agent for that action to be freely willed (Daniel Podgorski).

>Can anyone help me better understand what compatibilism is? It's completely frying my brain trying to understand the concept as I've been reading about it. Like, everything I read about it sounds precisely like hard determinism but just wrapped up in a way in which we can still hold people accountable morally for their predetermined actions. Maybe, that is precisely what it is?
Yes this is precisely what it is! It's just a form of determinism that is compatible with the way we tend to use language.

So I would also say that as a compatibilist you can still be lenient on people. You can still say: although this person chose to act the way they did, I know that the action did not originate within them but was caused by antecedent causes, which means that I think we should not hold them personally accountable.

Notice that even when I make this statement, I have to use ideas of choice. To say we should not hold them accountable means I am appealing to us making one choice over another. I could not do this if I wasn't at some level tacitly accepting that free will in everyday language is not libertarian free will, but compatibilist free will. This is why I personally consider determinism without compatibilism a very non-functional view.


Sources
    Unknown author (Feb 25, 2015). Compatibilism. Stanford Encyclopedia of Philosophy. Retrieved from https://plato.stanford.edu/entries/compatibilism/

    Daniel Podgorski (October 16, 2015). Free Will Twice Defined. Retrieved from https://thegemsbok.com/art-reviews-and-articles/philosophy-articles-friday-phil-free-will-determinism-compatibilism/

 No.6

>>5
>Compatibilists are also a form of determinists, and they hold that, because free will is often taken to be the ability to act free from arbitrary influence, we should not think of free will and determinism as being incompatible

Can you elaborate on what 'arbitrary influence' means here?

>>5
>you end up without an answer to the free will problem
If I'm a determinist, isn't my answer that there is no free will and that's just how things are? What problem am I not addressing? My worldview is logically consistent, as far as I'm seeing.

>do not include the ability to take other actions than you necessarily will
Okay, let's imagine you are thinking about what to eat for breakfast. I pause the universe, and I examine every molecule and atom and all the chemicals in your brain and all the synapses, etc. And then, let's say I have a perfect understanding of how to interpret this state. Could I predict the next state from this one? Would I be able to tell which synapses would fire in which order in your brain to the point that I could tell before you what you would choose? If not, what is the piece of the puzzle I would be missing to be unable to do this? What's the secret sauce that creates free will as you define it?

>>5
>This is why I personally consider determinism without compatibilism a very non-functional view.
Are you saying it is non-functional because you believe in personal responsibility and you want to hold people accountable? I'm not seeing what makes compatibilism more functional than determinism. It feels to me to be a compromise that sacrifices the fundamental truth for the sake of convenience. Is there more or something different to it than that?

 No.7

>>5
>Unknown author (Feb 25, 2015). Compatibilism. Stanford Encyclopedia of Philosophy. Retrieved from https://plato.stanford.edu/entries/compatibilism/
Okay, so I don't have enough time right now to read this whole source, but I spotted this early on.
"For the most part, what philosophers working on this issue have been hunting for is a feature of agency that is necessary for persons to be morally responsible for their conduct."
Is it not a problem when we start from a conclusion and work to fulfill its conditions? It looks to me like this: we want to hold people accountable, but we're smart people who believe in determinism, so we need to figure out a way to philosophy up a reason instead of accepting a simple inconvenient truth. Is that not what's going on here?

 No.8

>>6
>Can you elaborate on what 'arbitrary influence' means here?
Unfortunately no! I'm not smart enough. The way I understand the idea is that if someone was coerced or manipulated into doing something that is not in accordance with their will, or caused by any other factor to do something that they find to be against their will, then that's the kind of influence we're talking about.

>If I'm a determinist, isn't my answer that there is no free will and that's just how things are? What problem am I not addressing? My worldview is logically consistent, as far as I'm seeing.
The problem is that in your everyday language you say things that are inconsistent with this. Therefore your everyday language and your foundational beliefs clash.

>Okay, let's imagine you are thinking about what to eat for breakfast. I pause the universe, and I examine every molecule and atom and all the chemicals in your brain and all the synapses, etc. And then, let's say I have a perfect understanding of how to interpret this state. Could I predict the next state from this one? Would I be able to tell which synapses would fire in which order in your brain to the point that I could tell before you what you would choose? If not, what is the piece of the puzzle I would be missing to be unable to do this? What's the secret sauce that creates free will as you define it?
Possibly yes but possibly no. This depends upon some unknowns about quantum mechanics. It is not clear whether quantum movements are completely random, or if they follow a detectable pattern.

But maybe I haven't come across clearly: free will in compatibilism is defined not as the ability to act otherwise than you do, but as the ability to act in accordance with what you will.

>Are you saying it is non-functional because you believe in personal responsibility and you want to hold people accountable? I'm not seeing what makes compatibilism more functional than determinism. It feels to me to be a compromise that sacrifices the fundamental truth for the sake of convenience. Is there more or something different to it than that?
It's definitely different. The idea of responsibility is arbitrary. We decide where to place it. Compatibilists simply acknowledge that the responsibility we place on people is not based upon whether or not they were the source of the outcome, but rather on whether they were the mediator of the outcome.

So if everything can be traced back to a universal cause then we could blame everything that happens in the world on this. But this isn't very functional, because we want to live in a world where we can talk about making choices and attributing actions to people, so we instead believe in compatibilist free will, because it's more functional, but we could still accept this idea that everything originated from a necessary first cause.


>Is it not a problem when we start from a conclusion and work to fulfill its conditions? It looks to me like this: we want to hold people accountable, but we're smart people who believe in determinism, so we need to figure out a way to philosophy up a reason instead of accepting a simple inconvenient truth. Is that not what's going on here?
First of all, it's not clear that determinism is actually true. That's not been decided. The Stanford Encyclopedia article seems to be written by a person who believes the evidence points in another direction.

Secondly, it is in our biology and nature to hold people accountable. We do it naturally! We see the exact same kind of behavior in other primates, and even in most animals. You could even say, we are unable not to hold people accountable. It seems to be ingrained in the way we think.

So in order to make sense of the world, we have to explain what this natural impulse of accountability means and what it arises from. It does not seem to arise from an idea of strict causality, because it is not natural for humans to interpret the world in this way. It arises more from a feeling of something originating from an actor.

To sort of bring it back: you told your friend he should make a choice to try and be better. Choice to you in this context means he should make up his mind, such that it becomes his will to try and be better. This IS compatibilism. You're using the concept of choice not as a hard determinist, but as a compatibilist.

 No.9

>>8
>The problem is that in your everyday language you say things that are inconsistent with this. Therefore your everyday language and your foundational beliefs clash.
That's not a problem to me. For instance, I do not end every sentence with "I think and feel" even though every instance of me conveying something is a matter of my thoughts and feelings. I could say "green is undoubtedly the best color" and you would not get confused and ask where the source is to confirm that as fact. Right? And just as we can understand each other this way with casual language, I can also be a hard determinist and ask you to make a choice without extra explanatory information which conveys your lack of ultimate free will in the matter.

>Possibly yes but possibly no. This depends upon some unknowns about quantum mechanics. It is not clear whether quantum movements are completely random, or if they follow a detectable pattern.
>But maybe I haven't come across clearly: free will in compatibilism is defined not as the ability to act otherwise than you do, but as the ability to act in accordance with what you will.
This is the whole crux to me, which it seems you've sidestepped. I need to know what you believe is the answer. And, obviously we can't know the truth, but if you don't have an answer to that question, I don't know how you can follow through with anything else you believe regarding compatibilism. I mean, it follows that if you think of free will as I best understand the way by which you do, then you must believe that upon pausing the universe and analyzing the state, I could not determine with certainty what you would have for breakfast. Right?

>So if everything can be traced back to a universal cause then we could blame everything that happens in the world on this. But this isn't very functional, because we want to live in a world where we can talk about making choices and attributing actions to people, so we instead believe in compatibilist free will, because it's more functional, but we could still accept this idea that everything originated from a necessary first cause.
Are you saying we believe the same thing but you want to define terms differently for the semantic benefit of it? Or, do hard determinists and compatibilists fundamentally believe very different things? Like, how I see it, I can then be a hard determinist in my philosophical life because it's the way I view the world as functionally true, and speak in the terms a compatibilist would just so that I can actually communicate meaning with other humans. But, then it seems to me that compatibilism is trying to solve a problem that never needed solving, because we don't need to convey ideas perfectly, language is flexible. I can't be understanding this correctly, right? The foundation of compatibilism completely breaks my mind because I see determinism and free will as contradictory ideas based on the notion that I do believe I could predict what you choose for breakfast before you make that decision if I had the right equation and could plug in the right variables gathered from the state of the universe.

>>8
>it is in our biology and nature to hold people accountable. We do it naturally! We see the exact same kind of behavior in other primates, and even in most animals. You could even say, we are unable not to hold people accountable. It seems to be ingrained in the way we think.

>So in order to make sense of the world, we have to explain what this natural impulse of accountability means and what it arises from. It does not seem to arise from an idea of strict causality, because it is not natural for humans to interpret the world in this way. It arises more from a feeling of something originating from an actor.
My thing is that even if we naturally want to frame things this way, that doesn't make it right. Maybe it is counterintuitive to the way we and other primates want to organize the world, but that isn't actually useful evidence towards making a philosophical conclusion.

>You told your friend he should make a choice to try and be better. Choice to you in this context means he should make up his mind, such that it becomes his will to try and be better. This IS compatibilism. You're using the concept of choice not as a hard determinist, but as a compatibilist.
But that's not really what me telling him to be better was about. It was about my changing the set of inputs that go into the deterministic formula that produces an inevitable output. In effect, I don't think he ultimately has a choice, but I can still "sway" him because what I say is one of the inputs in the grand formula of fate. And, it's easier to just say "choose to be better" because I can convey meaning to him more easily that way than by saying "I'd like to alter your inputs in a way that may produce a more favorable result than if I did not alter your inputs, and so yadda yadda yadda..."
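
If it helps, here's a toy sketch in Python of what I mean by inputs into the formula. To be clear, the function, the "temper" field, and the weights are all made up for illustration; it's not a real model of anyone, just the shape of the idea.

# Toy illustration: a "decision" as a pure function of prior state plus inputs.
# Everything here is invented for the example; the point is only that my words
# are one more input, and the output is still fully determined.

def decide(prior_state, inputs):
    """Deterministically map (prior state, inputs) to an action."""
    score = prior_state["temper"]            # how likely he is to lash out
    if "choose to be better" in inputs:      # my advice is just another input
        score -= 1
    return "lashes out" if score > 0 else "stays calm"

state = {"temper": 1}
print(decide(state, []))                       # lashes out
print(decide(state, ["choose to be better"]))  # stays calm

Same state, different inputs, different (but still fully determined) output. That's all I mean by "swaying" him.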

 No.10

>>9
>I can also be a hard determinist and ask you to make a choice without extra explanatory information which conveys your lack of ultimate free will in the matter.
This is possible yes

But it's very confusing, I think. If you're always going to articulate the idea of choice in a compatibilist way, you should just accept that the compatibilist way of defining choice has more utility.

>This is the whole crux to me, which it seems you've sidestepped. I need to know what you believe is the answer. And, obviously we can't know the truth, but if you don't have an answer to that question, I don't know how you can follow through with anything else you believe regarding compatibilism. I mean, it follows that if you think of free will as I best understand the way by which you do, then you must believe that upon pausing the universe and analyzing the state, I could not determine with certainty what you would have for breakfast. Right?
No! Not right. The compatibilist understanding of free will is compatible with causal determinism. That's why it's called compatibilist.

>Are you saying we believe the same thing but you want to define terms differently for the semantic benefit of it?
Yeah pretty much

>do compatibilists fundamentally believe very different things?
They believe very different things about the semantic meanings of the words responsibility, free will and choice, yes!

>But, then it seems to me that compatibilism is trying to solve a problem that never needed solving, because we don't need to convey ideas perfectly, language is flexible
Compatibilism allows moral theory to be compatible with determinism. So it's just semantics, yes, but that semantic difference is useful because it allows for some other thoughts to be had that could not otherwise be had.

>The foundation of compatibilism completely breaks my mind because I see determinism and free will as contradictory ideas based on the notion that I do believe I could predict what you choose for breakfast before you make that decision if I had the right equation and could plug in the right variables gathered from the state of the universe.
Yeah, so the only thing you'd need to be a compatibilist is to completely abandon the idea of libertarian free will and accept compatibilist free will.

You already don't believe that libertarian free will exists. So it's not that far of a step.

>My thing is that even if we naturally want to frame things this way, that doesn't make it right. Maybe it is counterintuitive to the way we and other primates want to organize the world, but that isn't actually useful evidence towards making a philosophical conclusion.
It can be, if you want to build a moral system.

Nothing about compatibilism is inconsistent with a deterministic worldview, aside from the way free will and choice are defined, as far as I can tell. So it's no more or less right than determinism, because its relation to an ontologically objective reality is exactly the same, no matter what that reality might be.

>But that's not really what me telling him to be better was about. It was about my changing the set of inputs that go into the deterministic formula that produces an inevitable output. In effect, I don't think he ultimately has a choice, but I can still "sway" him because what I say is one of the inputs in the grand formula of fate. And, it's easier to just say "choose to be better" because I can convey meaning to him more easily that way than by saying "I'd like to alter your inputs in a way that may produce a more favorable result than if I did not alter your inputs, and so yadda yadda yadda..."
Yes, exactly, it's easier. So my argument would be: if your understanding of the underlying facts of the world does not differ from that of a compatibilist (you both believe in causal determinism), and the way you articulate yourself regarding choice and free will is always the same as how a compatibilist would define those things, how are you not a compatibilist?

 No.11

>>10
>But it's very confusing, I think. If you're always going to articulate the idea of choice in a compatibilist way, you should just accept that the compatibilist way of defining choice has more utility.
I can do no such thing, because I base my morality and belief system in hard determinism. So, I may borrow compatibilist language to communicate more effectively in casual conversation, but that in no way makes me a compatibilist. I'm actually very unhappy with the concept of compatibilism, as it appears to me as a way of having your cake and eating it too. They want to believe in determinism as I do, but they don't want to accept what that means for us and our moral systems.

>No! Not right. The compatibilist understanding of free will is compatible with causal determinism. That's why it's called compatibilist.
You haven't demonstrated that to me though. In fact, it appears you are deliberately avoiding answering my question directly regarding whether I could predict what you would choose to eat for breakfast. I assume there's a reason: that you object to the premise of the question, or to what answering it might imply?

>So it's just semantics, yes, but that semantic difference is useful because it allows for some other thoughts to be had that could not otherwise be had
But, what if I object to us having those thoughts? What if, just because it makes things easier, that doesn't make it more correct or better? I would say we are losing something important (not in casual conversation, but in deep moral philosophical discussion) by 'simplifying' things as it appears compatibilism does.

>>10
>Yeah, so the only thing you'd need to be a compatibilist is to completely abandon the idea of libertarian free will and accept compatibilist free will.

>You already don't believe that libertarian free will exists. So it's not that far of a step.
It's a monumental step honestly. Because it breaks the foundation of everything I believe. Frankly, accepting compatibilist free will would mean I would have to be less loving and more angry. So, on a personal note, you can take my hard determinism from my cold dead hands~

On a more serious note, I do accept that the best idea should win regardless of how I feel about it, but I've yet to see a functional example of why I should define free will in a compatibilist way, outside of making casual conversation work smoothly.

>It can be, if you want to build a moral system.
Are you saying you base your morality on the way the natural world works? As in, because it is the way we naturally think and the way our instincts work, that provides useful input into a morality framework?

>Yes, exactly, it's easier. So my argument would be: if your understanding of the underlying facts of the world does not differ from that of a compatibilist (you both believe in causal determinism), and the way you articulate yourself regarding choice and free will is always the same as how a compatibilist would define those things, how are you not a compatibilist?
But you are wrong, I don't always articulate myself this way regarding free will. I only do so when I'm having casual conversations, not philosophical ones. If you want to have a philosophical discussion about free will and what that implies for our morality systems, I would switch gears and start leaning heavily into hard determinism. Do you speak with friends only using your most academic and technically correct language?

I'd like to switch gears a bit here, and ask you, how does compatibilism benefit you when you are making moral decisions and are building a moral framework? What would you lose from a hard determinist worldview?

 No.12

>>11
>I can do no such thing, because I base my morality and belief system in hard determinism. So, I may borrow compatibilist language to communicate more effectively in casual conversation, but that in no way makes me a compatibilist. I'm actually very unhappy with the concept of compatibilism, as it appears to me as a way of having your cake and eating it too. They want to believe in determinism as I do, but they don't want to accept what that means for us and our moral systems.
I see what you mean

>In fact, it appears you are deliberately avoiding answering my question directly regarding whether I could predict what you would choose to eat for breakfast. I assume there's a reason: that you object to the premise of the question, or to what answering it might imply?
If everything in the universe followed predictable laws, then yes, you would be able to predict what I would eat for breakfast!

But this is an obvious assumption within determinism, which is itself the foundation for compatibilist thought. So this question and its answer don't really invalidate compatibilism.

Yes, you can predict what I had for breakfast, but I still chose to eat that thing for breakfast if we define choice from a compatibilist perspective, since it was in accordance with my will.

>But, what if I object to us having those thoughts? What if, just because it makes things easier, that doesn't make it more correct or better? I would say we are losing something important (not in casual conversation, but in deep moral philosophical discussion) by 'simplifying' things as it appears compatibilism does.
Then you would have a very good point! And a strong criticism of compatibilism.

>On a more serious note, I do accept that the best idea should win regardless of how I feel about it, but I've yet to see a functional example of why I should define free will in a compatibilist way, outside of making casual conversation work smoothly.
Well we are currently using a compatibilist framework in society, or for most people, a libertarian principle of free will. This is the foundation of I think all moral theories. So really the burden of proof is kind of on you, to demonstrate that moral theories are not useful, since it seems very obvious to most people that they are, I think.

>Are you saying you base your morality on the way the natural world works? As in, because it is the way we naturally think and the way our instincts work, that provides useful input into a morality framework?
kinda... yeah :c

We as humans have some intrinsic tendencies, and those intrinsic tendencies are what shape our morality. We like being alive, we like feeling pleasure, we dislike feeling pain, and so forth. We believe killing babies is categorically wrong, and many other such things. These inherent tendencies are the basis of morality, and the wellspring from which moral thought emerges.

The fact that these are the things people value makes moral theories that account for those values more useful than moral theories that account for some other arbitrary set of values that most people don't feel they want.

>But you are wrong, I don't always articulate myself this way regarding free will. I only do so when I'm having casual conversations, not philosophical ones. If you want to have a philosophical discussion about free will and what that implies for our morality systems, I would switch gears and start leaning heavily into hard determinism. Do you speak with friends only using your most academic and technically correct language?
No I guess I don't!

>I'd like to switch gears a bit here, and ask you, how does compatibilism benefit you when you are making moral decisions and are building a moral framework? What would you lose from a hard determinist worldview?
Let's imagine I want to assert a simple moral imperative.

It's wrong to steal. Something like this. This is the rule I want to propose. Now, a hard determinist would say, well hold on a second. No one is actually responsible for stealing. It was the original first cause, which caused the stealing to occur, so whatever caused that, must have been what was at fault. Therefore, we can never chastise, punish or find people at fault for stealing, because we are misattributing blame from the first cause to the person stealing.

This is hugely problematic in an everyday context, where if a person steals, like, a large amount of money and really causes a lot of harm, we can't really blame them in any way, because they were led to this decision by their antecedent causes. Even though it is obviously wrong to steal.

In the same way, neither can we thank people for their kindness, because they did not personally commit the kindness, the kindness was caused by the initial first cause.

We can't credit people for art, because that's not something they did, they were simply influenced to do it.

On the other hand, compatibilism determines the freedom of a person's action not from whether they could have done otherwise, but from whether or not they were allowed to act in accordance with their own will. So if someone steals, but they had to do it to save their kids or something and they then paid it back, we would say: clearly this person wasn't willing to steal, but they felt forced to do it, and thus they did not act freely and are not accountable.

If an artist produces a work and it was their intention to do so, we would say they managed to make a beautiful piece of art.

If someone performed an act of kindness out of the goodness of their heart, we would say they were commendable for it.

So that's sort of the difference and what we lose, from how I see it.

 No.13

>>12
>Yes, you can predict what I had for breakfast, but I still chose to eat that thing for breakfast if we define choice from a compatibilist perspective, since it was in accordance with my will.
But this seems to me to be working counter to the truth for the sake of convenience. Does it not? The truth is, if your action could be predicted before you did it, you did not truly have a choice in the matter. From within the frame of reference in your mind, you deliberated, you processed the chemicals and synaptic feedback, and a course of action occurs. But, in no way can I see that as actual free will, because what you did in the moment was really governed by everything that has happened before that moment, not what happened in the moment.

>Well we are currently using a compatibilist framework in society, or for most people, a libertarian principle of free will. This is the foundation of I think all moral theories. So really the burden of proof is kind of on you, to demonstrate that moral theories are not useful, since it seems very obvious to most people that they are, I think.
Just because things are easier, just because they are the way in which things have been done, just because people (obviously~) like them, doesn't make them better. Correct?

>kinda... yeah :c
>We as humans have some intrinsic tendencies, and those intrinsic tendencies are what shape our morality. We like being alive, we like feeling pleasure, we dislike feeling pain, and so forth. We believe killing babies is categorically wrong, and many other such things. These inherent tendencies are the basis of morality, and the wellspring from which moral thought emerges.
>The fact that these are the things people value makes moral theories that account for those values more useful than moral theories that account for some other arbitrary set of values that most people don't feel they want.
So, maybe this is a separate discussion topic in and of itself, but I passionately and vehemently and very personally, with every fiber of my being, disagree ❤️

>It's wrong to steal. Something like this. This is the rule I want to propose. Now, a hard determinist would say, well hold on a second. No one is actually responsible for stealing. It was the original first cause, which caused the stealing to occur, so whatever caused that, must have been what was at fault. Therefore, we can never chastise, punish or find people at fault for stealing, because we are misattributing blame from the first cause to the person stealing.
I'm actually really super okay with this! I believe that evil is a cycle. That evil was perpetrated against someone before they chose to perpetrate evil themselves (this is a simplification, so don't gut me over the details! I'd need pages to really dive into the details of this moralistic idea). So, I do not want to chastise or punish or blame someone who I think is a victim of circumstances. I think we agree that punishing victims is wrong (though not to strawman you, you may not agree with categorizing these people as victims). But! I acknowledge that we can't allow this person to steal without consequence. So, I would require the person be rehabilitated, to undo the damage caused to them by the evil previously perpetrated, through a loving and compassionate process, not a punitive one. And, maybe even a therapeutic process for the person who was stolen from, so that they too will not continue the cycle. These things may not be popular or practical ideas, but they are the ones I believe will work if implemented.

>In the same way, neither can we thank people for their kindness, because they did not personally commit the kindness, the kindness was caused by the initial first cause.
I am okay with this too. Kindness does not require thanks. What might be nice though, is to convey to the person what their kindness meant and did, so that the person who did the kindness can be glad for doing it.

>We can't credit people for art, because that's not something they did, they were simply influenced to do it.
Eh, I mean, you're sorta right technically. But I think we can still validate a person for the effort it took for them to walk a predestined path. They practiced and put in the time commitment (whether through free will or not) and they experienced the thoughts and feelings that they did along the way. And we can commend them for that still, because from their frame of reference, it was still difficult to accomplish. We don't have to frame all things from a universal POV.

I dunno, the way by which I approach these things as a hard determinist seems consistent to me based on my understanding of determinism, and still gives us leeway to validate good behaviors and correct bad behaviors. So, I feel like I lose nothing this way.

On the other hand, by adopting compatibilism, I feel like we lose the ability to reform victims (i.e. criminals) to the best of our ability. I think that we allow vindictiveness into our morality system. That what is best for the whole takes a back seat to our need to satisfy vengeance through punishment. Do you disagree?

 No.14

>>12
>>13
>But this seems to me to be working counter to the truth for the sake of convenience. Does it not? The truth is, if your action could be predicted before you did it, you did not truly have a choice in the matter. From within the frame of reference in your mind, you deliberated, you processed the chemicals and synaptic feedback, and a course of action occurs. But, in no way can I see that as actual free will, because what you did in the moment was really governed by everything that has happened before that moment, not what happened in the moment.
Honestly, this is just semantics. Free will and choice, these are just words. We choose how we want to define them.

>Just because things are easier, just because they are the way in which things have been done, just because people (obviously~) like them, doesn't make them better. Correct?
well, if things are easier and people like them, that is one argument for them being better. You would have to prove somehow that moral theories overall are negative, I would say, to prove that compatibilist free will does not have more utility in this regard.

>So, maybe this is a separate discussion topic in and of itself, but I passionately and vehemently and very personally, with every fiber of my being, disagree ❤️
We should have that discussion sometime! Could be a good thread.

>I'm actually really super okay with this! I believe that evil is a cycle. That evil was perpetrated against someone before they chose to perpetrate evil themselves (this is a simplification, so don't gut me over the details! I'd need pages to really dive into the details of this moralistic idea). So, I do not want to chastise or punish or blame someone who I think is a victim of circumstances. I think we agree that punishing victims is wrong (though not to strawman you, you may not agree with categorizing these people as victims). But! I acknowledge that we can't allow this person to steal without consequence. So, I would require the person be rehabilitated, to undo the damage caused to them by the evil previously perpetrated, through a loving and compassionate process, not a punitive one. And, maybe even a therapeutic process for the person who was stolen from, so that they too will not continue the cycle. These things may not be popular or practical ideas, but they are the ones I believe will work if implemented.
Okay, that makes sense.

What if regardless of what you do, a person seems incapable of change? Every time you let them out of your sight, they cheat when they can get away with it, steal when they know they won't be caught, and if they are, they take advantage of your compassion, slurp up your love, and in all manner of speaking, attempt to take as much as they can, and you don't know how to convince them not to.

Would you have an answer in this situation, where you have a hypothetical actor who just seems to be impossible to change?

>Eh, I mean, you're sorta right technically. But I think we can still validate a person for the effort it took for them to walk a predestined path. They practiced and put in the time commitment (whether through free will or not) and they experienced the thoughts and feelings that they did along the way. And we can commend them for that still, because from their frame of reference, it was still difficult to accomplish. We don't have to frame all things from a universal POV.
Aha! So the moment it's not about punishment, but about art, you wish to view actions as originating from people, and not from a universal first cause. This could be considered inconsistent.

In fact, under hard determinism, you could even say they were unrightfully privileged, because they were simply granted better circumstances by the first cause. Their position is not to be commended in any way, it's actually somewhat unfair that they have this, while others do not. Or, what do you think?

>I am okay with this too. Kindness does not require thanks. What might be nice though, is to convey to the person what their kindness meant and did, so that the person who did the kindness can be glad for doing it.
This seems actually like a really good response. I agree that that seems pretty sufficient.

but... if as you say, we cannot assign responsibility to people, is there any reason for this person to feel happy that they did something good? If they do not own that action, why should it ever please them that they did it? Rationally, it makes no sense for them to feel satisfied that they did this, unless we are working either from libertarian free will or compatibilist free will. So I think you kinda have to maybe think about that too. Pride in accomplishment and satisfaction at a job well done also seem to go out the window under hard determinism.

>On the other hand, by adopting compatibilism, I feel like we lose the ability to reform victims (i.e. criminals) to the best of our ability. I think that we allow vindictiveness into our morality system. That what is best for the whole takes a back seat to our need to satisfy vengeance through punishment. Do you disagree?
I do disagree. I think we can recognize when rehabilitation is far more effective and prudent than punishment, and recognize forgiveness as an essential virtue to breaking cycles of violence.

We can understand that an eye for an eye, when applied globally, yields eternal suffering for everyone, and learn when it's right to forgive and nurture, and when it's right to punish.

Systems can be entirely dedicated to rehabilitation of people that have done wrong, with comfortable environments and encouragement to be better being the rule and not the exception. Incompatibilism is not necessary for us to see these as being the most pragmatic outcomes. We can recognize that someone performed an action, but also recognize that they did this in retaliation for something, or that they may be capable of unlearning the pattern of behavior that leads them to do things like this, and from that recognize that despite their fault, punitive action is not the correct response.

 No.15

>>14
>Honestly, this is just semantics. Free will and choice, these are just words. We choose how we want to define them.
And I guess I'm saying you've chosen an invalid way to define the words. Certainly you disagree, but I see what the compatibilist is doing as akin to manipulation of the truth for the convenience of their worldview. Do you not see this as redefining words so that your conclusion fits?

>>14
>well, if things are easier and people like them, that is one argument for them being better.
Could you not see how some pretty awful things could be justified that way?

>We should have that discussion sometime! Could be a good thread.
I will leave it to you to start it if you'd like to further delve into it.

>What if regardless of what you do, a person seems incapable of change? Every time you let them out of your sight, they cheat when they can get away with it, steal when they know they won't be caught, and if they are, they take advantage of your compassion, slurp up your love, and in all manner of speaking, attempt to take as much as they can, and you don't know how to convince them not to.

>Would you have an answer in this situation, where you have a hypothetical actor who just seems to be impossible to change?
This seems pretty straightforward: we restrict their ability to cause further harm. It's the same as with any other system; life under house arrest might be an example. It's unfortunate that there is no path to reform, but we can prevent further harm while still providing a humane life for them.

>Aha! So the moment it's not about punishment, but about art, you wish to view actions as originating from people, and not from a universal first cause. This could be considered inconsistent.
That's actually not what I am saying at all here.

Do recall that I explained that in terms of morality and judgement and philosophy I'm strict about hard determinism. But I also explained that compatibilism is fine as a tool for interpersonal communication, since it provides a simple model for understanding the complexity of life.

Think of it like this: hard determinism is like heavy machinery and compatibilism is like a hand tool. One works for big, important construction tasks like laying a foundation, and the other can be used effectively for the smaller, less important bits. The right tool for the job, yes?

Don't assume you've caught me in a contradiction, just because I believe in practical solutions over perfect solutions.

>In fact, under hard determinism, you could even say they were unrightfully privileged, because they were simply granted better circumstances by the first cause. Their position is not to be commended in any way, it's actually somewhat unfair that they have this, while others do not. Or, what do you think?
I do believe in cosmic luck. I do believe they were given an advantageous position by the first cause. But, I also recognize that we don't motivate humans this way, as I'm sure you do. I feel like I've been constantly hammering in the point that frame of reference matters, but you are not acknowledging it for some reason. Am I not clear on this point I am trying to make?

>>14
>but... if as you say, we cannot assign responsibility to people, is there any reason for this person to feel happy that they did something good? If they do not own that action, why should it ever please them that they did it? Rationally, it makes no sense for them to feel satisfied that they did this, unless we are working either from libertarian free will or compatibilist free will. So I think you kinda have to maybe think about that too. Pride in accomplishment and satisfaction at a job well done also seem to go out the window under hard determinism.
I guess I won't continue along this thread until you acknowledge what I mean by frame of reference. Do you know what I mean by that? Do you think what I mean by that is inconsistent with belief in hard determinism?

>>14
>I think we can recognize when rehabilitation is far more effective and prudent than punishment, and recognize forgiveness as an essential virtue to breaking cycles of violence.
Demonstrate for me a moment where punishment is better than rehabilitation, and what you think that punishment looks like. Because punishment is, technically, a very vague thing to promote.

 No.16

>>15
Okay, let's take a step back! I feel like I've been too gotcha-y trying to conform you to my view and not spent enough time demonstrating that I'm listening. So I'll sort of try to restate your ideas, and you can contradict any that you think I've gotten wrong.

You believe in hard determinism, in the sense that you think people have no free will.

You define free will as being the ability to act in some other possible way than one does.

You reject the idea of compatibilist free will on the grounds that it feels instinctually and morally wrong, or that it is a deliberate attempt to manipulate language in order to suit a specific purpose.

You do, however, acknowledge that there is a lot of utility in formulating ideas related to free will from a compatibilist perspective in everyday language, and you do so yourself, as a rule.

By frame of reference, you mean that a person that acts does so because of the antecedent causes of that act. As such it can all be accounted for by factors which can be determined beforehand.

Is any of this inaccurate?

 No.17

>>16
I'm honestly not entirely sure I grasp the concept of compatibilist free will well enough to draw real conclusions. Like, I'm really giving it my best to comprehend it, but it seems like a contradiction in my mind. Let me try to define it as best I understand it.

So compatibilist free will is that we have an unbroken causal chain up to the moment I make a decision. And based on my state of being, it's technically feasible that I choose options ABC but definitely not XYZ. So, I'm still responsible for my choice between ABC. Am I grasping this right?

And my objection to that, if it's correct, is that I think inside my mind I might be deliberating between ABC - consciously or unconsciously - but in reality it's already a fact to the universe that I will go with B. No matter what. So, I don't see there to be free will in this equation, and to me, it feels like compatibilism is trying to inject free will into the process for the benefit of being able to hold people accountable, because that's what feels good to us. We like holding people accountable. But I'm only concerned with what I think is the most rational truth, not what I can gain from choosing a certain truth.

When I express myself to others, I'll take on the vantage point of compatibilism, because it would be prohibitive to my goals of communication to do otherwise. But that doesn't mean I believe exactly what I express. Maybe I'm a habitual liar then! But I'm also able to connect with people and promote the right thoughts and feelings that I want in them. If I want them to feel proud for whatever reason, I can't really do that approaching things as a hard determinist. I'd probably just piss people off!

But when making moral choices and building a framework, I don't use shortcuts to understanding.

What I mean by frame of reference is that as limited humans, we can't perceive that we didn't choose. We feel a choice. And, from that frame of reference, inside our limited minds and not from the reference point of a God or the universe, we don't respond well to hard determinist ideas. I can promote good feelings in someone by being proud of their accomplishments, and I feel good and they feel good. And! While they maybe are predestined to be an awesome artist, they still experience from their limited frame of reference the struggle and effort to get there. Even if you imagine they were on rails on this path, they still experienced the journey and that is something that I can appreciate and praise them for in a genuine manner. Make sense?

 No.18

>>17
>What I mean by frame of reference is that as limited humans, we can't perceive that we didn't choose. We feel a choice. And, from that frame of reference, inside our limited minds and not from the reference point of a God or the universe, we don't respond well to hard determinist ideas.
Yeah, this makes sense. My question then, I suppose, would be: why do you want to take the point of reference of god or the universe, when you in fact are a person?

>So compatibilist free will is that we have an unbroken causal chain up to the moment I make a decision. And based on my state of being, it's technically feasible that I choose options ABC but definitely not XYZ. So, I'm still responsible for my choice between ABC. Am I grasping this right?
This is sort of a way to explain compatibilist free will very succinctly, but it's maybe not the most accurate representation of it for complex thought.

A more accurate representation is that free will is measured on a sliding scale. The more you're allowed to perform actions that are in accordance with your will, the more free you are. So if you have the option between ABC you're pretty free, because you can pick whichever you want the most. If ABC are available to you, but someone is pointing a gun at your head and saying if you take anything but C he will kill you, you're less free, since it seems your possible actions are very limited by an outside factor.
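
If it helps to see that sliding scale spelled out, here's a toy sketch. The scoring function is something I just made up to illustrate the idea; it's not a standard measure from the literature.

# Toy sketch of "degrees of freedom" in the compatibilist sense:
# how many of the options you would willingly take are actually open to you?
# The measure and the numbers are invented purely for illustration.

def freedom(willed_options, permitted_options):
    """Fraction of willed options not blocked by outside constraints."""
    if not willed_options:
        return 1.0  # nothing you will is being blocked
    blocked = willed_options - permitted_options
    return 1.0 - len(blocked) / len(willed_options)

wants = {"A", "B", "C"}
print(freedom(wants, {"A", "B", "C"}))  # 1.0   -> free to act on your will
print(freedom(wants, {"C"}))            # ~0.33 -> gun to your head: far less free

Determinism is assumed to be true the whole way through; the only thing the scale tracks is how much outside factors keep you from doing what you will.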

>When I express myself to others, I'll take on the vantage point of compatibilism, because it would be prohibitive to my goals of communication to do otherwise. But that doesn't mean I believe exactly what I express. Maybe I'm a habitual liar then! But I'm also able to connect with people and promote the right thoughts and feelings that I want in them. If I want them to feel proud for whatever reason, I can't really do that approaching things as a hard determinist. I'd probably just piss people off!
Yes, this makes sense to me the way you're thinking about it.

>What I mean by frame of reference is that as limited humans, we can't perceive that we didn't choose. We feel a choice. And, from that frame of reference, inside our limited minds and not from the reference point of a God or the universe, we don't respond well to hard determinist ideas. I can promote good feelings in someone by being proud of their accomplishments, and I feel good and they feel good. And! While they maybe are predestined to be an awesome artist, they still experience from their limited frame of reference the struggle and effort to get there. Even if you imagine they were on rails on this path, they still experienced the journey and that is something that I can appreciate and praise them for in a genuine manner. Make sense?
I'm gonna come back to the idea of praise.

I don't believe, if you were a hard determinist, that you could praise them in a genuine manner. You could fabricate praise in a way, or you could feel viscerally that they deserved praise, or you could understand that they felt they deserved praise, but inside you would have a rational belief that they did not really earn anything. I suppose you could say 'wow that looks really good' and avoid stating anything incongruent with your belief. But you leave the subject to infer that because it's good, they did a good job, which seems inconsistent. You're aware that that's how it will be interpreted, and you're probably aware that that's the interpretation that makes them feel good. So in a sense, it feels as if the praise is sort of dishonest or inconsistent if it comes from a hard determinist view? That's the way I understand it.

Honest or consistent praise, in my mind, would be recognizing that someone has performed an act that is worthy of praise and then commending them for it.

 No.19

>>18
>Yeah, this makes sense. My question then, I suppose, would be: why do you want to take the point of reference of god or the universe, when you in fact are a person?
That's a really excellent question! I think, I suppose, that I believe that morality should be tied to something. Some people anchor it to a god or religion. Not sure, but maybe in part you anchor it to something naturalistic. I would try to anchor it to properties of the universe. That this is the way the universe works for us, and as such, I can write my moral code upon that. I assume that kind of thing is the foundation of most if not all moral systems, right? If I just said my own limited POV was adequate in determining a moral framework, I feel I would be excluding important truths and that my foundation wouldn't be so solid!

>A more accurate representation is that free will is measured on a sliding scale. The more you're allowed to perform actions that are in accordance with your will, the more free you are. So if you have the option between ABC you're pretty free, because you can pick whichever you want the most. If ABC are available to you, but someone is pointing a gun at your head and saying if you take anything but C he will kill you, you're less free, since it seems your possible actions are very limited by an outside factor.
Okay, makes sense. Though for the purposes of hard determinism, I don't need the case where a gun is involved. Even in the simplified version where you are as totally free as possible to choose between ABC, I don't think that you were truly free to choose at all. I believe your choice was predetermined, that no factors exist in the physical realm that could randomize things or introduce variability to give you more than the illusion of choice. I don't believe in a soul, as an example, that would give you special power to defy cause and effect.

Let me take this a step further. I wholeheartedly believe we are nothing more than more complex versions of the kinds of programs I write for my job. We have hardware (our body and tissues) and software (the state of the synapses in our brains, and maybe the chemicals too). And, we don't do what is outside our programming. Yes, we're very sophisticated! But we are still not much more significant than ants or viruses in our ability to make choices. They just do, as we just do. And we think we are special because we think and feel, but it's just an illusion. But, at the same time, I can't deny the significance of feeling in my existence. That I'm programmed to care about such things. That I can't effectively defy what I am or how the universe works. So, I play the game of life. I try to generate the right chemicals in my brain, and I do the same for the people I love. I want all people, and animals, and everything, to only generate good chemicals and good sensations in their hardware. Because, that's how I'm programmed to feel.
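
To put the program analogy in concrete terms, here's a toy sketch in Python. The "hardware" and "software" fields and the mood arithmetic are completely made up; the only point is what I mean by us never doing anything outside our programming.

# Toy "agent as program" sketch: hardware plus software state, and a step
# function that deterministically maps (state, stimulus) to the next state
# and an output. All fields and rules here are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Agent:
    hardware: dict = field(default_factory=lambda: {"intact": True})  # body/tissue stand-in
    software: dict = field(default_factory=lambda: {"mood": 0})       # synapse/chemical stand-in

    def step(self, stimulus):
        """Same state plus same stimulus gives the same next state and output, every time."""
        self.software["mood"] += 1 if stimulus == "kind words" else -1
        return "good chemicals" if self.software["mood"] > 0 else "bad chemicals"

me = Agent()
print(me.step("kind words"))   # good chemicals (mood goes to 1)
print(me.step("harsh words"))  # bad chemicals (mood drops back to 0)

Rerun it from the same starting state with the same stimuli and you get the same outputs; that's the whole point of the analogy for me.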

Do you think that's an okay way to feel about the world? I've always wondered if it is okay to think that way.

>Honest or consistent praise, in my mind, would be recognizing that someone has performed an act that is worthy of praise and then commending them for it.
I dunno, I guess I feel like I'm consistent with my worldview while still being genuine. But maybe I'm not genuine in my praise and/or consistent. I'll think about it some more and try to get back with my thoughts.

 No.20

File: 1559724557125.png (232.82 KB, 886x576, 443:288, Udklip.PNG)

>>19
>That's a really excellent question! I think, I suppose, that I believe that morality should be tied to something. Some people anchor it to a god or religion. Not sure, but maybe in part you anchor it to something naturalistic. I would try to anchor it to properties of the universe. That this is the way the universe works for us, and as such, I can write my moral code upon that. I assume that kind of thing is the foundation of most if not all moral systems, right?
Maybe! I think it really depends on how you want to interpret the word "foundation". No one would disagree that moral theory must in some regard adhere to the rules of nature, and that we can't ignore these when we try to construct morality. Generally speaking, though, everyone agrees that human morality must have its foundation in humans.

We can see this if we look at some of the strongest moral theories, such as Kantian deontology, rule utilitarianism, value utilitarianism, and ethical egoism as Hobbes applies it in his social contract theory. We'll notice a common trend in where these theories take their starting point for moral thought.

  Kant
Kant argues that there are such things as categorical imperatives. A categorical imperative is an objective, rationally necessary and unconditional principle that we must always follow despite any natural desires or inclinations we may have to the contrary. All immoral actions are irrational according to Kant.

How did he derive these imperatives? Every principle that Kant reasons from has the sole justification that it is a principle which any rational human would agree is rational, upon due reflection. His moral theory thus takes its wellspring in humans and their critical ability. Many believe he may have underestimated the extent to which humans disagree, but his theory nonetheless remains one of the most influential (Stanford Encyclopedia of Philosophy, 2016).


  Utilitarianism
All forms of utilitarianism seek to increase the amount of good stuff in the world. There are different variants, such as act utilitarianism, which judges each act individually, or rule utilitarianism, which judges rules as being either good or bad and then judges individuals by those rules (Internet Encyclopedia of Philosophy).

There are also theories that differ on what good things even are, such as hedonistic utilitarianism and value or preference utilitarianism. Hedonistic utilitarianism holds that with our actions we should maximize human happiness and minimize human suffering, or in some variants all happiness and all suffering. Preference or value utilitarians reject this idea, on the grounds that some people have wants and desires that are not rooted in what makes them feel good, or may even prefer or value things that make them feel bad (Wikipedia).

So these theories also take their wellspring in humans, but this time in their preferences, values, or propensity to feel pleasure and pain.

  Ethical Egoism
Ethical Egoism is a complex set of theories with one simple foundation. The idea behind any ethical egoist theory is that I morally ought to perform some action if, and only if, and because, that action maximizes my self-interest. It therefore again takes its wellspring in humans, this time in whatever is in a person's self-interest (Stanford Encyclopedia of Philosophy, 2019). You might think this moral theory is awful and leads to certain chaos, but prominent philosophers like Hobbes argued that everything necessary for building a functional society is contained within the theory of ethical egoism, if we apply it as a social contract. Hobbes reasoned that some things are in everyone's self-interest if everyone does them, and that it is therefore good to do these things. He reached some astounding conclusions using this method, such as it being good to submit to the authority of an absolute, undivided and unlimited sovereign power, and that cooperation and agreement are good even from an ethical egoist's point of view (Stanford Encyclopedia of Philosophy, 2018).


> Okay, makes sense. Though for the purposes of hard determinism, I don't need that case where a gun is involved. Even in the simplified version where you are as totally free as is possible to choose between ABC, I don't think that you were truly free to choose at all. I believe your choice was predetermined, that no factors exist in the physical realm that could randomize things or introduce variability to give you more than the illusion of choice. I don't believe in a soul, as an example, that would give you special power to defy cause and effect.
Yes, this is the foundation of hard determinism! And a different way of defining free will than how I do it.

> Let me take this a step further. I wholeheartedly believe we are nothing more than more complex versions of the kinds of programs I write for my job. We have hardware (our body and tissues) and software (the state of the synapses in our brains, and maybe the chemicals too). And, we don't do what is outside our programming. Yes, we're very sophisticated! But we are still not much more significant than ants or viruses in our ability to make choices. They just do, as we just do. And we think we are special because we think and feel, but, it's just an illusion. But, at the same time, I can't deny the significance of feeling in my existence. That I'm programmed to care about such things. That I can't effectively defy what I am or how the universe works. So, I play the game of life. I try to generate the right chemicals in my brain, and I do the same for the people I love. I want all people, and animals, and everything, to only generate good chemicals and good sensations in their hardware. Because, that's how I'm programmed to feel.

>Do you think that's an okay way to feel about the world? I've always wondered if it is okay to think that way

I think… it’s a fine way to think about the world. It’s the way I used to think about it, for sure. I also think, though, that within the context of our lives as they are right now, there are more meaningful ways to conceptualize what we are, and what the nature of free will is, that allow us to build moral theories, feel more truly free to choose, and be more genuine and consistent in our day to day lives. And that’s compatibilism, to me. You may think that's just a convenient word juggle, but what is language and understanding, if not a convenient way to interpret the world? Should we not define our experiences, in the ways that suit all of us best? If not, I would be interested to hear reasons for why we shouldn't.

Some papers, like pic related (Smilansky, 1994), try to argue that hard determinism can be moral, in the sense that it can lead to desirable outcomes, but they start from the belief that it cannot be, because that's the rational starting point, given that without personal responsibility, it seems there is no point in any sort of moral theory. So essentially if you are a hard determinist, it seems you must be personally either amoral, or inconsistent in your beliefs, regardless of how moral you may appear to others.

That is not to say that hard determinists are all morally unattractive. No, I think rather that hard determinism has a certain attractive flavor to it, in the sense that it allows people to see past character flaws and natural inclinations and gives the possibility of valuing people equally regardless of who they are and what they've done. But one must accept that morality cannot be built on a hard determinist foundation, and that any judgement of goodness or fairness comes from a compatibilist or libertarian intuition, which means that any judgement of a hard determinist actor's value is inherently inconsistent with the base principle of hard determinism. I guess this means it's really hard to argue for its efficacy or usefulness, because the moment people accept it to be true, it ceases to be better in that person's mind, or introduces inconsistency.


  Smilansky, S. (1994). The ethical advantages of hard determinism. Philosophy and Phenomenological Research, 54(2), 355-363.

  Unknown Author (continually edited). Utilitarianism. Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Utilitarianism on the 5th of June 2019

  Unknown Author (Jul 7, 2016). Kant's Moral Philosophy. Stanford Encyclopedia of Philosophy. Retrieved from https://plato.stanford.edu/entries/kant-moral/ on the 5th of June 2019

  Stephen Nathanson (unknown date). Utilitarianism, Act and Rule. Internet Encyclopedia of Philosophy. Retrieved from https://www.iep.utm.edu/util-a-r/#H2 on the 5th of June 2019

  Unknown Author (Apr 30, 2018). Hobbes’ Moral and Political Philosophy. Stanford Encyclopedia of Philosophy. Retrieved from https://plato.stanford.edu/entries/hobbes-moral/ on the 5th of June 2019

  Unknown Author (Jan 15, 2019). Egoism. Stanford Encyclopedia of Philosophy. Retrieved from https://plato.stanford.edu/entries/egoism/ on the 5th of June 2019

 No.21

>>20
>Kant
>Utilitarianism
>Ethical Egoism
Thanks for going through the details of these systems. Preference utilitarianism sounds interesting, at least as described in one sentence. I might look up more.

I do agree that we have to consider humans when making a moral framework, I suppose it would be silly of me not to. But, I am abundantly cautious when I do so. Because, humans want and like a lot of bad things! We aren't always a kind and loving species, we are destructive more often than not. So, to the fullest extent that I can, I would try to downplay human interest, human tendencies, human desires. It's a balance I suppose.

>I think… it’s a fine way to think about the world. It’s the way I used to think about it, for sure. I also think, though, that within the context of our lives as they are right now, there are more meaningful ways to conceptualize what we are, and what the nature of free will is, that allow us to build moral theories, feel more truly free to choose, and be more genuine and consistent in our day to day lives. And that’s compatibilism, to me. You may think that's just a convenient word juggle, but what is language and understanding, if not a convenient way to interpret the world? Should we not define our experiences, in the ways that suit all of us best? If not, I would be interested to hear reasons for why we shouldn't.
I understand the appeal! But, let me be real. I don't feel like I can just choose compatibilism. It feels contrary to how I understand the universe. It's like, a really awesome cake! Except I'm gluten intolerant, and vegan, and don't have a mouth to eat it with. Like, even if it's a million times more convenient and enables all sorts of better and more sound forms of thought, I can't just choose it out of convenience. That wouldn't be true to myself. It needs to provide a better understanding of the universe, and while you may feel it does, I don't. Like, let me put it like this. If I believed in something along the lines of a soul, something that I was referring to before as some kind of 'secret sauce', then I would be comfortable with this idea. That there was some kind of force that made us more than just hardware and software. Can you provide me with an idea that is palatable along these lines? I'd be interested to hear what you got.

>So essentially if you are a hard determinist, it seems you must be personally either amoral, or inconsistent in your beliefs, regardless of how moral you may appear to others.
>But one must accept that morality cannot be built on a hard determinist foundation, and that any judgement of goodness or fairness comes from a compatibilist or libertarian intuition, which means that any judgement of a hard determinist actor's value is inherently inconsistent with the base principle of hard determinism. I guess this means it's really hard to argue for its efficacy or usefulness, because the moment people accept it to be true, it ceases to be better in that person's mind, or introduces inconsistency.
I am certain that a lot of very smart people who have come before me have tried to solve the problem of morality from a hard determinist POV. I honestly don't see the issue; I can still imagine ways in which I seem pretty consistent while still having a foundation in humanism. But let's say for a moment that that is off the table, as you no doubt believe. What if I try to derive a moral code based off an axiom like "all living things strive to propagate" or something akin to that? Then I could say "the species evolved in such and such a way, which rewards cooperation, increasing the chance of survival and further propagation, etc etc". I'm just making this up on the fly, but what do you think of that? What's the obvious flaw that I'm missing?

>>19
>Honest or consistent praise in my mind, would be recognizing in your mind that someone has performed an act that is worthy of praise and then commending them for it.
<I dunno, I guess I feel like I'm consistent with my worldview while still being genuine. But maybe I'm not genuine in my praise, and/or not consistent. I'll think about it some more and try to get back with my thoughts.
So, what makes something genuine anyway? What if I tell someone I am proud of their accomplishments, but I actually feel nothing special about them? But I still logically recognize that something existed in which pride could be had? Do I need to feel in order to be genuine? What if I feel really happy for someone's accomplishments, but I recognize as a hard determinist that they didn't really have agency? Am I genuine if I tell them I'm happy for them? Do my thoughts and belief system make my feelings disingenuous? To be genuine, must your thoughts and feelings always be aligned? Because if so, I've been very disingenuous to a lot of people accidentally.

 No.22

>>21
I'm glad you found them interesting ^_^


>I do agree that we have to consider humans when making a moral framework, I suppose it would be silly of me not to. But, I am abundantly cautious when I do so. Because, humans want and like a lot of bad things! We aren't always a kind and loving species, we are destructive more often than not. So, to the fullest extent that I can, I would try to downplay human interest, human tendencies, human desires. It's a balance I suppose.
>humans want a lot of bad things
Bad by whose standards? I assume by yours, a human's, right? You see the problem, I hope. The only way that you can call something bad is by employing a human standard. It therefore seems no less likely that all ethical theories must employ human standards to some extent. I definitely agree, though, that they must also be very careful in how they do it.

>Like, let me put it like this. If I believed in something along the lines of a soul, something that I was referring to before as some kind of 'secret sauce', then I would be comfortable with this idea. That there was some kind of force that made us more than just hardware and software. Can you provide me with an idea that is palatable along these lines? I'd be interested to hear what you got.
Mmm, no, not really. I don't believe in that kind of thing. There are many philosophers that have tried to argue for this, but I don't believe this is where compatibilism gets its ideas from. There are emergent properties in this hardware and software, as you sort of put it (I would just say physical matter; software is anyway just an expression of some properties of physical matter), which we can wonder at and find amazing, and which are themselves sort of secret and saucy, because they are so insanely complex as to be basically mystical. But other than this, I don't know what I could point to that would possibly convince you.

>I am certain that a lot of very smart people who have come before me have tried to solve the problem of morality from a hard determinist POV. I honestly don't see the issue; I can still imagine ways in which I seem pretty consistent while still having a foundation in humanism. But let's say for a moment that that is off the table, as you no doubt believe. What if I try to derive a moral code based off an axiom like "all living things strive to propagate" or something akin to that? Then I could say "the species evolved in such and such a way, which rewards cooperation, increasing the chance of survival and further propagation, etc etc". I'm just making this up on the fly, but what do you think of that? What's the obvious flaw that I'm missing?
Yes, so the problem here is that a hard determinist would not agree that they strive. Creatures do not strive to propagate; they simply tend to behave in ways that lead to propagating, right? Because we don't assign will to them, we can't say they are striving either. It seems your axiom can only be rooted in a compatibilist or libertarian understanding of choice, not in hard determinist belief.

>So, what makes something genuine anyway? What if I tell someone I am proud of their accomplishments, but I actually feel nothing special about them? But I still logically recognize that something existed in which pride could be had? Do I need to feel in order to be genuine? What if I feel really happy for someone's accomplishments, but I recognize as a hard determinist that they didn't really have agency? Am I genuine if I tell them I'm happy for them? Do my thoughts and belief system make my feelings disingenuous? To be genuine, must your thoughts and feelings always be aligned? Because if so, I've been very disingenuous to a lot of people accidentally.
I suppose we could measure genuineness on different levels. I like this questioning of what genuineness is. I think the word is fluffy.

We could say someone's expression of praise is genuine to their emotions. Or we could say that someone's expression of praise is genuine to their beliefs about a person. A hard determinist could never be genuine to their beliefs about a person if they expressed praise, unless they were inconsistent in some of their beliefs. A compatibilist who believed someone had earned praise, but who did not viscerally feel they did, would be genuine to their beliefs if they expressed praise, but would not be genuine to their emotions.

Does this sound like a fair way of thinking about it to you?

 No.23

>>22
>Bad by whose standards? I assume by yours, a human's, right? You see the problem, I hope. The only way that you can call something bad is by employing a human standard. It therefore seems no less likely that all ethical theories must employ human standards to some extent. I definitely agree, though, that they must also be very careful in how they do it.
So I'm just pointing out a flaw in this section, not arguing from a hard D perspective. Like I was saying, if I were to advocate morality as you do, rooted in something human, when you say "must employ human standards to some extent", I think "sure, well let's make that the minimal possible extent then" cause humans are naturally bad at being moral, if you ask me.

>There are emergent properties...
Do you just accept that on faith, or do you actually have evidence? You are a causal determinist, right, so what causes these emergent properties? I can answer that question with certainty, and that's what my hard D philosophy arises from. If you can answer the question the other way, with certainty, then my stance would become flaccid and I would really have no reason to insist on having this hard D philosophy.

As a sort of tangent, don't you find saying what you had in this paragraph to be as problematic as those who would say 'God works in mysterious ways'? It feels very hand wavey, too hand wavey for me to be able to accept it personally.

>Yes, so the problem here is that a hard determinist would not agree that they strive.
Okay okay, fine. Then, let's try this again as
^All living things are conditioned to propagate
I'm allowed to believe in evolution and selective pressure as a hard D, right? What's the issue with this one?

>We could say someone's expression of praise is genuine to their emotions
>Does this sound like a fair way of thinking about it to you?
Yep, I can live with this! I think people care more about how you feel about them than whether your philosophical belief system aligns with that feeling. I can still say, I am proud of you, be genuine in something internal to myself, and believe what I do. If anyone questions why I am proud of them, I will say "my feelings arise from an illogical sense that you worked so hard and did a good job, even though you're no more able to have chosen than a tree" but shorten that to "you worked so hard and did a good job" because the former is quite a mouthful, don't you agree? It's not like you want me to tell you about my hard D in every conversation we have right? ^_^

 No.24

>>23
>cause humans are naturally bad at being moral, if you ask me.
Is there any chance, do you think, that humans are bad at being moral not because they're in tune with their preferences, but because they have certain biases and heuristics that keep them from seeing when those biases are actually preventing them from acting morally, even while they believe they are acting in accordance with their own principles of fairness and morality? I have a feeling that most people are well intentioned and work from good principles, but manage to reach conclusions that work against those same principles, by way of bias and heuristics.

My view is that the better informed about morality you are, and the better informed in general, the fewer of these errors you will make, and the better the world will tend to be for everyone.

>Do you just accept that on faith, or do you actually have evidence? You are a causal determinist, right, so what causes these emergent properties? I can answer that question with certainty, and that's what my hard D philosophy arises from. If you can answer the question the other way, with certainty, then my stance would become flaccid and I would really have no reason to insist on having this hard D philosophy.
Unfortunately I have to handwave. I'm simply not well enough informed on this topic! Sorry!

>As a sort of tangent, don't you find saying what you had in this paragraph to be as problematic as those who would say 'God works in mysterious ways'? It feels very hand wavey, too hand wavey for me to be able to accept it personally.
Not really. I think it's extremely self-evident that we don't understand everything about biological organisms and how they function the way they do. Either these properties that we have, like consciousness or empathy, are emergent from the physical properties that we know, or there is some secret sauce, god or mysticism or something, that we don't know about. I can't see what else it could be.

I suppose I should have included that it is possible that this other thing happens, and I'll concede that it's a bit problematic to assume that these things must be emergent from physical properties.

>All living things are conditioned to propagate
Yes this is fine, but I don't think you can derive morality from this axiom without being inconsistent in your beliefs.

I am also conditioned, when I look in the mirror, to maybe self-deprecate a little. It does not follow from this that self-deprecation is good, or something I value.

> If anyone questions why I am proud of them, I will say "my feelings arise from an illogical sense that you worked so hard and did a good job, even though you're no more able to have chosen than a tree" but shorten that to "you worked so hard and did a good job" because the former is quite a mouthful, don't you agree? It's not like you want me to tell you about my hard D in every conversation we have right?
Haha, yes, you are right. Idk, I still think most people would prefer you also thought of them as being worthy, rather than only feeling that they are. But feeling certainly goes a long way, and is very important, and probably enough for all practical purposes.

 No.25

>>24
>I have a feeling that most people are well intentioned and work from good principles, but manage to reach conclusions that work against those same principles, by way of bias and heuristics.
>My view is that the better informed about morality you are, and the better informed in general, the fewer of these errors you will make, and the better the world will tend to be for everyone.
So, I could take this a few ways, but let's say I don't exactly disagree, though there are important caveats here. People are generally good, but they can't actually extend that past a certain point. Have you heard of Dunbar's number? I haven't read all the research and what the implications of it are, so forgive me for shooting from the hip here, but as I understand it, we only have so many people we can really connect with and humanize. It makes sense, we developed as a species in small communities, where the community was everything to our survival. We provide for our community, we humanize them. And other communities are nice, but if they are in the way, we are pretty good about not extending our morality towards them. It happens all over the world. We are terrible as a species at uniting, while we are great at creating divisions between us. We are naturally good towards those on the inside, but we don't extend that any further than need be.

Some people are better able to extend their goodness towards others. But I think that is not a natural human trait. I think we do that through education and a rational, less-than-human process of overcoming biases built into our system. But that education isn't correcting errors; it's more like introducing errors with positive side effects!

>Yes this is fine, but I don't think you can derive morality from this axiom without being inconsistent in your beliefs.
As we say in America, "hold my beer".

>I am also conditioned, when I look in the mirror, to maybe self-deprecate a little. It does not follow from this that self-deprecation is good, or something I value.
This is a separate conditioning, an immoral conditioning that has occurred, because it doesn't serve the axiom. That makes you a malfunctioning machine. I would call this immoral (maybe not of you, but in relation to you) based on the axiom, because it is not serving the species' propagation in any way. But we shall not punish you, because it isn't your fault. It is, though, a moral imperative that we recondition you to be more in line with the axiom. The most effective treatment is compassion and caring for your emotional needs. A friend (well, anyone, technically) would be morally obligated to tend to you in such a manner as to improve your self-esteem and help maintain your emotional balance (both of which help serve the species per the axiom).

And! Something caused this fault, as we know effects have causes. Society maybe, culture maybe; something else that produced the malfunction must also be reconditioned. It would be our moral imperative to seek out why you malfunctioned and fix the source of it. If possible, we may even find why society/culture malfunctioned and fix that as well. While it may be nice to go back as far as the first cause and fix that, we are probably limited by time and space in that regard.

I think, since humans are best able to perform the function of propagation when loved and cared for, loving one another universally is a moral imperative. Policies that damage the environment may cause short-term gains in propagation, but they threaten the species in the long run, so it would be a moral imperative to take care of our Earth. I think I could run pretty far with this idea, if I had more time to play with it!

> I still think most people would prefer you also thought of them as being worthy, rather than only feeling that they are. But feeling certainly goes a long way, and is very important, and probably enough for all practical purposes.
This is far off the road, humour me if you will. You believe in the importance and incredible significance of genetics, right? What if your friend, who is genetically advantaged in something, does well in it? For instance, maybe they were genetically predisposed to be kinder or something. Would you have genuine praise, in thought, for their charity and kindness?

 No.26

>>25
> People are generally good, but they can't actually extend that past a certain point. Have you heard of Dunbar's number? I haven't read all the research and what the implications of it are, so forgive me for shooting from the hip here, but as I understand it, we only have so many people we can really connect with and humanize. It makes sense, we developed as a species in small communities, where the community was everything to our survival. We provide for our community, we humanize them. And other communities are nice, but if they are in the way, we are pretty good about not extending our morality towards them. It happens all over the world. We are terrible as a species at uniting, while we are great at creating divisions between us. We are naturally good towards those on the inside, but we don't extend that any further than need be.
>Some people are better able to extend their goodness towards others. But I think that is not a natural human trait. I think we do that through education and a rational, less-than-human process of overcoming biases built into our system. But that education isn't correcting errors; it's more like introducing errors with positive side effects!
I'm kiiinda in agreement. I think sometimes you get a process more analogous to correcting errors than accounting for them. Otherwise I agree with everything you said.

>I think, since humans are best able to perform the function of propagation when loved and cared for, loving one another universally is a moral imperative. Policies that damage the environment may cause short-term gains in propagation, but they threaten the species in the long run, so it would be a moral imperative to take care of our Earth. I think I could run pretty far with this idea, if I had more time to play with it!
The problem is, you're not giving any reason why the tendency to propagate is more important to consider morally than the tendency to self-deprecate. I could make the exact same argument in reverse, defining self-deprecation as a fundamental tendency that should be considered morally good and describing propagation as an immoral conditioning. I can do this because you've given no reason why one tendency should be distinct from another. Essentially, your theory isn't really consistent in its reasoning, because if it considers one tendency worthy of protection simply because that tendency exists, it must consider EVERY tendency worthy of protection.

>This is far off the road, humour me if you will. You believe in the importance and incredible significance of genetics, right? What if your friend, who is genetically advantaged in something, does well in it? For instance, maybe they were genetically predisposed to be kinder or something. Would you have genuine praise, in thought, for their charity and kindness?
Yeah, I would. I think it's very commendable.

 No.27

>>26
>The problem is, you're not giving any reason why the tendency to propagate is more important to consider morally than the tendency to self-deprecate.

Do I have to? You said you had no issue with my axiom before, so I thought I was in the clear to build out a logical argument from it. Isn't all morality a bit arbitrary like this anyway? I guess I've never tried to construct morality in such a strict way, so throw me a bone.

 No.28

>>27
Well, so the thing is, the axiom IS fine, in the sense that it's true that all things are conditioned to propagate. But just remember that whatever reasoning you use to arrive at your conclusions, it shouldn't be possible to also use that reasoning to arrive at conclusions you don't agree with. Otherwise, your moral theory will be inconsistent.

 No.29

>>28
But all things aren't conditioned to self-deprecate. And my axiom doesn't say all conditioning is the right conditioning. There's a clear distinction here in that one conditioning serves the species and the other doesn't.

 No.30

>>29
Now we're getting somewhere.

Okay, then I would go with something like, all humans are in some sense conditioned to violence, since we have anger and indignation as instinctual emotional responses to injustice and other things like this, and these emotional states promote violent action.

Therefore, violence is justified if a person feels angry or indignant.

 No.31

>>30
Okay, yes! There was once a time when this conditioning did serve the species. It was a morally correct conditioning! But our species is no longer fighting in the jungles and savannas and forests for survival as it once was. The fact of our current world is that scarcity of resources is optional for the species, so our old conditioning is no longer moral and is in fact detrimental. We must continue to evolve towards a modern conditioning which doesn't include anger. In this way, I think I maintain consistency.

 No.32

>>31
Mmmm, not really.

So, as I was pointing out originally, the axiom fell apart. So what's happening now is you're reasoning from intuition. You're no longer claiming that everything we're conditioned to do is moral, so it's not clear where you're getting the backing for your normative claims. You say we must continue to evolve, but why must we? What moral principle informs this view?

 No.33

>>32
Hmm, you aren't making this easy on me!

I dunno, like I guess in the end, I probably can't out-philosophize those that have come before me. But, what I can do, is make my own moral code based on whatever criteria I want. And, that's gonna be some kind of hybrid hard determinism which still allows me to get away with utilizing human values and tendencies. It's okay if you think my beliefs are inconsistent! In the end, what matters most to me is love, charity, tolerance, forgiveness, kindness, etc. And I don't need any defined, consistent system to reinforce in me that these things are right and good. You could say, to me, they are self-apparent c:

 No.34

>>33
I guess that wraps us up, then?

I think that sounds like a pragmatic and rewarding way to think about the world. Obviously, none of us are 100% consistent in our reasoning and emotions; that's probably just impossible given the nature of all that is unexplored. But I guess we both furthered our understanding a bit on these topics. I think this is something worthwhile too.

Thank you for the discussion <3

 No.35

The book I've been reading uses very different terms and I don't think I'd be able to summarize it at all, but it treats determinism, materialism, and similar views as being somewhat deficient for describing key phenomena.

This thread clearly goes into a lot more detail so I'll need time to read through carefully.

 No.36

the problem with determinism is that the only way i could truly determine its actuality is to be able to see it in time, both past and present. sort of a laplace's demon situation

also, i just wanted to see what it does to my name

 No.43

>>4
My personal stance is that "free will" doesn't really matter, as it's an impractical concept.
Same deal for determinism, I guess.
Basically, I'm of the stance that what is, will be. No point in dwelling on it.

 No.50

Free will is a self-contradicting concept. I'm doubtful anyone has ever come up with a cohesive definition of free will that isn't either self-refuting or tautological. It's meaningless as far as I'm aware.

 No.148

>>50
>I'm doubtful anyone has ever come up with a cohesive definition of free will that isn't either self-refuting or tautological.
Isn't that a tautology?

 No.149

>>148
No, it could be contingent on actual facts about the universe.

 No.197

>>149
>>50
If this is what it takes for you to consider something meaningless, then aren't an insane number of concepts completely meaningless to you?

A human is often defined simply as that which is human, or has human-like characteristics. Or it's a member of the human race. When is something a member of the human race? It just is?

Does that mean this conception of humanity is meaningless to you as well?

Art could be defined as something created with creative imagination, and often with an intent to please others. This is clearly self-refuting, because it makes a chair, a novel hand gesticulation, and a set of weights in a weightlifting set art. No matter how you define art, it will include some things which we don't consider art, or exclude some things which we do.

Does this mean this conceptualization of art is meaningless to you?

I feel like, at some point, you have to accept that many of the words we use, do not meet these high standards you have, and that that doesn't make those words invalid or meaningless.

 No.198

>>197

"Art" and "humanity" are easy examples of words that are meaningless, or at least words whose definition is very soft and subjective.  What people consider to be art is going to vary from culture to culture, and even between individual persons.  Humanity's definition can also shift to include dogs and cats, or disinclude "the barbarians".  "Free will" is also a fairly soft definition, and could include or dismiss situations like doing something while held at gunpoint.  Legally, you probably wouldn't be considered to exercise free will in that situation, but philosophically one could argue that you were still in control of your body and could've chosen to be shot.

It's true that in everyday conversation these words are still usable, if you mention art or humanity to someone in your vicinity you're likely to at least get close to similar concepts in their head.  But philosophically and logically things do require a much more concrete definition, otherwise they can't really slot into the rest of a discussion.  Without determining precisely what "free will" is, how can we determine whether or not it exists?

 No.214

>>198
Oh, are you a new person or the same poster?

>"Art" and "humanity" are easy examples of words that are meaningless, or at least words whose definition is very soft and subjective.
Well, yes, this is why I picked them, of course ^_^. If a person wants to make the claim that any word whose definition is not clearly contingent on an underlying ontologically objective reality is meaningless, and wants that claim to be respected, they then have to contend with the obvious examples of words which most consider meaningful but which aren't determined this way, and explain why those aren't meaningful.

What would it take in order for a concept to be considered meaningful in your world-view?

>Legally, you probably wouldn't be considered to exercise free will in that situation, but philosophically one could argue that you were still in control of your body and could've chosen to be shot.
One could argue, yes. But any time one could argue something, that doesn't make every other thing that could be argued a meaningless idea, right? Or what do you think?

>Without determining precisely what "free will" is, how can we determine whether or not it exists?
Well, whether or not it does exist is in part very contingent on how we define it, as well as on an ontologically objective reality. There are different schools of thought as to how it should be defined, rooted in different rational or pragmatic justifications. Most theories try to be congruent with what most people feel or believe free will is, or lay out a justification based on how free will would work in a pragmatic context. It's a pragmatic and rational concern, which can be evaluated, for all the reasons outlined in the discussions above.

I would contend that every prominent definition of free will contributes to philosophical discourse on the topics of morality and metaphysics in a constructive way, in light of the ways it allows us to think of those topics and the perspectives it gives us for evaluating ideas within them. They are all part of 'good' or 'useful' frameworks, in the sense that said frameworks are disputable, debatable, and transparent in the ways one can reason to the disparate definitions of free will.

 No.246

>>214

>Well, whether or not it does exist is in part very contingent on how we define it

I believe that's exactly the point. Without defining what it actually is that we're talking about, the discussion seems doomed to be pointless. It's like the question was "what properties does zorgblarg have? Is it even real?", and then everybody had opinions and long disagreements about it. And then, after those already happened and keep happening, a new person came along and asked "wait, what's zorgblarg even?" and got a shrug in response. "It's whatever suits your rhetorical position in the zorgblarg discussion".

I'm on the "the concept is malformed from the ground up" side. I've also yet to hear a definition which leaves anything to be discussed afterwards.

Also, I don't think "art" is meaningless, since it can actually be defined with reference to the culture that produced or experienced it. Very roughly and in extremely short terms, "art is what we agree to be art" is the definition, and it's fine, since it's about a social construct. This is to be contrasted with "free will", which is supposed to be an actual objective property. "Does the rat in the maze have free will? Let's have an annual vote about it - whatever option wins is true until the next year, when we'll vote again"

 No.248

>>246
>I believe that's exactly the point. Without defining what it actually is that we're talking about, the discussion seems doomed to be pointless.
Two disparate definitions of free will were laid out in the very first post of this thread, and those were the ones being discussed. If the point is that discussion of free will without defining free will is pointless, then this is an awful point, considering that there are two operating definitions in use, and it costs nothing to introduce a new one if one wills.

The idea I was replying to, was the idea that 'every' definition of free will is inherently meaningless to this person.

>This is to be contrasted with "free will", which is supposed to be an actual objective property.
It's not supposed to be an objective property? I think no one with any sort of background knowledge in this area would think this. People that define free will attempt to make it an epistemically objective quality, in that if one knows the definitions one should be able to make a determination of whether or not something is free, but it's clearly a socially constructed idea, no matter how you slice it. It's a determination we make about acting agents, something we ascribe to them, not a property of their physical matter.

>"Does the rat in the maze have free will? Let's have an annual vote about it - whatever option wins is true until the next year, when we'll vote again"
This is an absurd characterization of what philosophical discussion around the concept of free will looks like. Just because we socially construct something does not mean we don't base our choices on an understanding of reality, or that this construction can't be disputed through rational means.

 No.250

>>248

The first post doesn't have any definition of free will, much less two of them. It has definitions of two philosophical stances related to free will, which isn't the same thing at all.

>It's clearly a socially constructed idea, no matter how you slice it.

Untrue. That is unless you subscribe to a form of the idea that everything is a social construct, in which case that's an underlying issue which would need to be handled first. I can and do slice it as "it's not socially constructed", with a side of "otherwise it's useless as an idea". Free will is supposed to be about* the ability of agents, any agents, to make choices. The true answer to whether they have it and whether it's possible to have it at all shouldn't be cultural.

And yes, it is intended to be absurd. It is not an illustration of what the discussion looks like though, but rather a perfectly viable outcome if it's just a social construct.

 No.251

>>250
> It has definitions of two philosophical stances related to free will, which isn't the same thing at all.
An operational compatibilist definition was presented

>personal responsibility and choice, do not include the ability to make other actions than you necessearily will.
>[Compatibilists] hold, that because free will is often taken to be the ability to act freely from arbitrary influence, we should not think of free will and determinism as being incompatible
Free will is the ability to act freely from arbitrary influence.

Libertarian free will is so commonly understood as to be the default view, and I thought the definition attached to this set of theories didn't need explaining, but a definition was present in the sources. There is simply a demand for an ability to act otherwise than one will or could have done. Free will, in a libertarian model, is precisely that ability. This was the definition we used for the rest of the discussion.

>Untrue. That is unless you subscribe to a form of the idea that everything is a social construct, in which case that's an underlying issue which would need to be handled first. I can and do slice it as "it's not socially constructed", with a side of "otherwise it's useless as an idea". Free will is supposed to be about* the ability of agents, any agents, to make choices. The true answer to whether they have it and whether it's possible to have it at all shouldn't be cultural.
Well certainly every concept we have in language is constructed, socially or not. There's no denying this.

If free will is not a social construction, explain to me the criteria that in your head separate a socially constructed concept from one that is not socially constructed, so that we can be on the same page.

>And yes, it is intended to be absurd. It is not an illustration of what the discussion looks like though, but rather a perfectly viable outcome if it's just a social construct.
No it's not? No one talks about things this way. We don't just take votes about whether or not something is or isn't something. We use reason, argumentation and a multitude of other complex processes to determine what we think is the truth.

You're bringing out this absurd example, saying: if this concept is socially constructed, then why don't we just vote about it, and decide what the truth is that way? That's just as good as any other method. The obvious answer to this is that no one in philosophy just votes; you use these better methods to determine knowledge, and that doesn't make it any less socially constructed, but it's very different from your characterization of the process.

 No.254

>>251

"Often taken" is not a definition. The ♥ symbol is "often taken" to mean love, but "love" won't help you when you see these: https://www.vectorstock.com/royalty-free-vector/cardiology-medicine-and-cardiac-surgery-symbol-vector-21150366

A bigger point though is that referring to those models as if they had definitions in them renders all but the most shallow discussion moot. Free will is true in model X, it's false in model Y, and that's all there is to it. At that point we're reduced to a question of personal taste. "I find this more appealing, so I'll align myself with model X, so free will exists to me" (this is not a description of what you're doing but an illustration to consider, as was the rat-in-maze-vote earlier which seems to have also been misunderstood, prompting this note and the fuller explanation below). Hardly a strong refutation against those denying there being much worth to the concept, but interestingly enough perfectly in line with my original "I've also yet to hear a definition which leaves anything to be discussed afterwards".

>There's no denying this.

As also there's no denying this is sophistry - this rabbit hole leads all the way down to "truth itself is a social construct", at which point we might as well stop as we'll be leaving the realms of "worthwhile" altogether.

Anyway, back on track: "separate a socially constructed concept from one that is not socially constructed". If something is a concept, it's by necessity socially constructed, and "not socially constructed concept" is contradictory, which is why I never proposed it in the first place. The distinction is in what's described by the concepts, not in what we're using to do the description. The below is relevant here.

>if this concept is socially constructed, then why don't we just vote about it, and decide what the truth is that way

That is not what I'm doing. I'm not suggesting this as an actual path to follow. I'm saying this approach is perfectly compatible with it being purely a social construct. Think of it as a thought experiment if that helps. Substitute the "does rat in the maze have free will" in this vote with a fact about our physical reality, like "does water freeze at 0°C" (all the obvious caveats apply). The absurd thing in that vote isn't that truth is determined by voting. It's that we're acting as if it is, yet with a vote of "no" it's still true. On the other hand, if everyone agreed that something's art... it's art. That's fine. If they agree it's not, it's not. Also fine. And now put the original question back in. Are we now like in the former case (0°C water freezing), or like the latter case (X is art)?

Note that this goes beyond just baseline properties of matter. "Is the sky blue" (obvious caveats apply)? Well, "blue" is a color, purely a social construct that fuzzily describes wavelengths of light, and in fact different cultures have different color sets. Doesn't matter, it'll still be blue even if the vote goes "no".

 No.258

>A bigger point though is that referring to those models as if they had definitions in them renders all but the most shallow discussion moot.
It does not, as is evidenced by the in-depth discussion that has already taken place in this thread, and as continues to take place between philosophers who do define free will. Arguments like the consequence argument are continually debated to assess the merit of compatibilist definitions of free will.

>Free will is true in model X, it's false in model Y, and that's all there is to it.
Do you disagree? Do you not think free will is true in compatibilist determinism, and false in hard determinism? Because that's actually the way the theories work.

>Hardly a strong refutation against those denying there being much worth to the concept, but interestingly enough perfectly in line with my original "I've also yet to hear a definition which leaves anything to be discussed afterwards".
You're right, it's not a strong refutation. But you're also wrong, it's not strong support for your original idea.

Socially constructed concepts are not evaluated on a wholly subjective basis. I don't know where you get this idea, but you have to strike it from your mind, because it's completely false.

We know that every concept is socially constructed. You agree with this. If any concept can be argued for or against in any way, it follows that we can have discussions of socially constructed concepts.

>As also there's no denying this is sophistry - this rabbit hole leads all the way down to "truth itself is a social construct", at which point we might as well stop as we'll be leaving the realms of "worthwhile" altogether.
It's not sophistry, it's just obvious. Concepts are socially constructed. You even agree with this in the paragraph below.

>I'm saying this approach is perfectly compatible with it being purely a social construct
Explain the difference that you imagine there being between something which is 'purely a social construct' and a 'concept which is socially constructed'

>That is not what I'm doing. I'm not suggesting this as an actual path to follow. I'm saying this approach is perfectly compatible with it being purely a social construct.
This idea of something being purely a social construct, as opposed to something being only partially a social construct, isn't accurate.

Every concept is the same degree of constructed, which is to say, entirely.

The freezing point of water is as much of a social construct as something being art. We say that water freezes at 0 degrees, but the truth is that there's a sliding scale of organization of water molecules into a crystalline structure, which starts before the water ever hits zero and continues after, and which is also influenced by atmospheric pressure and the amount of solutes in the water, as well as other factors.
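
For the solute part specifically there is even a textbook relation, the standard freezing-point depression formula from general chemistry (nothing specific to this thread, just the usual colligative-property result): \Delta T_f = i \cdot K_f \cdot b, where b is the molality of the dissolved substance, i is the number of particles it dissociates into, and K_f for water is roughly 1.86 K·kg/mol. Dissolve enough salt in it and liquid water sits comfortably below "the freezing point".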

Water does not freeze at zero, but water's freezing point is defined as being zero.

In the same way that this is the case, there are physical properties in a person's brain which determine whether they consider something art or not. If our socially constructed definition of art is that enough people have to agree that it's art, then it's contingent on the physical matter in people's brains, and therefore connected to an underlying reality, in the same way that the freezing point of water is.

>Well, "blue" is a color, a purely a social construct that fuzzily describes wavelengths of light and in fact different cultures have different color sets. Doesn't matter, it'll still be blue even if the vote goes "no".
This line of reasoning draws an arbitrary distinction between the properties of the physical matter in people's brains and the properties of waves of light.

 No.287

>>258

I'm having flashbacks to phpbb days of ever-expanding quote stacks. I know from experience exactly where that leads and keep trying to rein that in; you keep trying to expand it out.

I also see you're rather determined in misunderstanding and/or misrepresenting what I'm saying. I could once again take many a cheap snipe at that (two examples: "But you're also wrong, it's not strong support for your original idea" - care to point out where I claimed it was? Or the "Do you disagree?" bit, to which the answer is 无, quite legitimately), but really, what's the point. Instead, maybe let's try something different and narrow the focus while switching gears altogether. Maybe that'll let us work out the underlying issue.

So here's a simple question to hopefully let us do that: "does phlogiston exist?" (why, why not)

 No.305

>>287
>I'm having flashbacks to phpbb days of ever-expanding quote stacks. I know from experience exactly where that leads and keep trying to rein that in; you keep trying to expand it out.
I'm not trying to do anything other than reply to you.

>I also see you're rather determined in misunderstanding and/or misrepresenting what I'm saying.
You have some kind of misunderstanding of the terminology of social constructionism and the ways these theories work, I think. I'm not determined to misunderstand or misrepresent you, but often when you say 'socially constructed', or 'objective', or something like this, it seems you mean something that is divorced from the actual meaning of the word. I might be able to intuit what your actual intentions with the idea are, but I'd rather read your words literally than have to do guesswork like that.

>two examples
I will address these, I guess.

>"But you're also wrong, it's not strong support for your original idea" - care to point out where I claimed it was
You said the way the models worked was 'perfectly in line' with your original claim. I took this to mean you were outlining your argument in support of the original claim, which I disagreed with, on the grounds that your representation of the ways the discussion was carried out was flawed.

>Or the "Do you disagree?" bit, to which the answer is 无, quite legitimately
Just say no...
You shouldn't bring a Chinese character into an English discussion. I'm going to assume Google Translate is accurate and this just means no.

The reason I asked is that you seemed to think that any attempt to define free will was an inherently incongruent way of viewing the world. I wasn't sure if you thought the theories were necessarily wrong because they had different takes.



But anyways, yes, let's try to narrow the focus a bit.

Phlogiston exists as a social construct, but it's been demonstrated to be a bad representation of reality, a flawed model so to speak, with less utility and accuracy than later models, and therefore it is no longer in use.

There is not likely to be a single homogeneous substance in the ontologically objective reality that we can term phlogiston, in the way that Johann Joachim Becher proposed it.

 No.311

>>305

Your assumption about 无 isn't fully correct. It's actually a pretty useful thing, read as "mu" (well, technically that one's "wú" in Mandarin but that's a whole other thing) and originating with the Mu Koan. Used in that manner it's the sort of "no" which says "no" to the assumptions underlying the yes/no question presented. I grant it's not in common use, but with just how useful it is? It should be. Also, let's not forget here it was explicitly listed as an example of the cheap snipes made possible rather than as a real answer.

Setting aside all the fluff though, back to the narrowed focus. So, what you're saying is essentially that the question itself is wrong (hey look, it's a 无 answer), since phlogiston is just a construct that's part of a model, and it's all about the models and how well they work out. Do correct this if I got it wrong.

Okay, let's grant that and then look at the second statement. Why is it relevant how it was originally proposed? It was proposed within a model, and you seem to be of the idea that definitions are to be strictly and purely linked to their models. So, within that model, it exists, and that's all there is to it. Without that model, it's not even a concept. It seems to me then that saying anything about it makes no sense (including whether it's likely to be applicable to a real substance or not), as we're outside of that model. Did I get this right?

 No.317

>>311
>Setting aside all the fluff though, back to the narrowed focus. So, what you're saying is essentially that the question itself is wrong (hey look, it's a 无 answer), since phlogiston is just a construct that's part of a model, and it's all about the models and how well they work out. Do correct this if I got it wrong.
This is very close to how I think about it!

I would say it was not a sufficiently clarified question. This is because there are at least two possible questions contained within it.

Does the social construct phlogiston exist?

and

Does a form of matter, with physical properties matching the ones described by the social construct phlogiston, exist in an ontologically objective reality?

Without clarification, it is impossible to tell which is meant.

>since phlogiston is just a construct that's part of a model, and it's all about the models and how well they work out.
Yeah, pretty much!

Well, it's not relevant how it was originally proposed, except to say that this original proposition is how phlogiston is still defined today. If phlogiston had been redefined by someone to better match reality as we know it, we would say it's likely that phlogiston exists.

>So, within that model, it exists, and that's all there is to it.
Yes, but I have to stipulate that you have to believe that the model is accurate. If you do not believe the model is accurate, there would be no reason to believe that phlogiston existed.

>Without that model, it's not even a concept
Correct

This was a better criticism, I would say! A more informed one.

The models of free will are evaluated in the same way as the model of phlogiston, but the outcome of the evaluation is different, because they actually attempt to correspond with what we know about reality in contemporary philosophy. The models are continually updated to match our understanding of the world. This is why, if we adopt the lenses of these models, even considering the context of our modern world, the conclusions of the models still hold up, as far as we are informed.

The conclusions drawn in physics and the metaphysical sciences are promptly incorporated into the models as our understanding of the world grows. Right now it's looking like both determinism and compatibilism are false because of some intricacies of quantum mechanics, but if we make the assumption that events are causally determined despite this suggestion, which is not outside the realm of possibility, the models hold up. The models are probably completely internally consistent, but we only accept their truthfulness if we also accept the assumptions they are predicated on.

 No.361

>>317

Quantum mechanics. I don't usually like quantum mechanics being brought into this, since it tends to get god-of-the-gaps-y. The difficulty of maintaining that everything is deterministic at the tiny scales is actually often pointed to as "see, the nondeterministic behaviour is how we are making choices that aren't just an automatic next-step consequence of the state of the universe", a common favourite. Of course, there's also the option that it could all just be completely random, so there are no "choices" as such. These aren't the only possibilities, either. And so nothing actually gets solved in the free will question.

>Yes, but I have to stipulate that you have to believe that the model is accurate. If you do not believe the model is accurate, there would be no reason to believe that phlogiston existed.

Why should what I believe be at all relevant? I don't think it should affect what's true and what's false, other than the obvious caveats like statements in some way about my beliefs themselves.

The only way it makes sense to compare the "same" thing between different models is if they share a certain basic common ground. It could be a common definition. With your approach, there seems to be nothing stopping me right now, sitting outside the phlogiston theory of combustion, from defining phlogiston as methane and adding that to our current reigning model. Methane exists there, and it even has many shared properties with the original phlogiston. And so, phlogiston now exists once again. Though I could also define it again as "the gas contained in all combustible substances", so then it doesn't. We now have four models, in two of which it exists, one in which it doesn't, and one in which the question is ill-formed. "Does phlogiston exist?" - well, pick the model you like by whatever criteria you like and have your answer instantly. The question just isn't interesting, leaving no room for reasoning - just look at the model you're using and you have your answer. The relation of this to the free will question should be clear - does it exist? Well, whichever model you pick of those on offer you'll have an answer and that will be that. But it's also pretty meaningless as a term, we might as well have that model call it anything else at all ("teacupicity", why not) since it bears no relation to anything outside it. It simply can't, they don't share definitions.

Another commonality possible between models is if there's the same thing they're trying to, well, model. There's something external to each of them that the models are trying to describe. We can very easily think of a model in which a person is defined as a motile cuboid (there are many reasons to do that, mostly to simplify the math as we try to approximate how many people will fit in a space or the effect of weather or whatever other thing). Is that a good definition? Well, the answer to that actually rather depends on the sort of questions we want answered, but those questions are really about people, who exist independent of that model, and real phenomena which also happen to them independent of the model. They are really what we want to know about, not glorified 3d rectangles or dots or whatever the current model in use defines people as. The relevance to the free will question is this - is there an actual common underlying external phenomenon (that may or may not occur) we're trying to model with those different models when they use the same term of "free will"? I'd say there is. Though the ironic thing, after writing all of that, is that I have a stance of "I just don't see how an answer of whether free will is real or not matters either way".

 No.362

>>361
I agree regarding quantum mechanics; it's usually used to support pretty reductive takes. For most discussions, the only thing bringing quantum mechanics up really highlights is what we have left to determine, and therefore where the uncertainty in whatever model you're discussing lies.

Well, the reason why I say "believe" is because I'm more or less of the opinion that absolutely certain knowledge is impossible. But I should probably put that aside and say that the model is accurate if it corresponds to reality; that's a simpler way of conveying what I mean. Practically, we may never be certain whether it corresponds or not, so the best we can do is believe based on a thorough examination of the evidence.

> With your approach, there seems to be nothing stopping me right now, sitting outside the phlogiston theory of combustion, from defining phlogiston as methane and adding that to our current reigning model.
You're correct, nothing stops you, but absolutely no one would take you seriously, and your concept of phlogiston would have no practical applicability or usefulness in a broader context, making it effectively useless knowledge.

>The relation of this to the free will question should be clear - does it exist? Well, whichever model you pick of those on offer you'll have an answer and that will be that.
"Does free will exist", in some ontologically objective sense, is not the question anyone was interested in answering in this thread. We were interested in understanding what the disparate conceptions of free will were, and what their practical utility was. Regarding the question of free will, we assumed determinism throughout most of the thread, so as to avoid arguing the truthfulness of the broad umbrella term free will, which covers many possible definitions, in favor of discussing which definition was the most accurate, pragmatic and congruent with previously established knowledge.

>is there an actual common underlying external phenomenon
Yeah, I think there is. The human brain is hardwired to assign blame, responsibility and agency to other actors. We account for this tendency by constructing words such as choice, free will, responsibility and so forth, which we use to describe moral frameworks and to understand our own tendency to judge. These moral frameworks, when applied rationally, help us become more consistent in our judgements, and to be more fair and free of bias.

These tendencies are the common underlying phenomenon, which the theories attempt to account for. It's a theory of what it is we should ascribe agency to, basically.

The relevance of the concept is that without a conception of free will, moral frameworks don't make sense. And we really want to have moral frameworks that are accurate, so that we can hold each other accountable in the ways that are most just, most accurate, or most pragmatic.

 No.384

>>362

Absolutely certain knowledge being possible or not isn't really the question. The question is "is there a true answer at all", even if it's impossible to know that the one we have is really the one that's "true". It would be hard to judge models on how they "correspond to reality" if there weren't an underlying reality to begin with; that would be the immutable factor we're trying to build models of, and which wouldn't actually change as we adopt different ones.

"No one would take me seriously" isn't exactly a concern, especially when crafting a deliberately absurd example for the purposes of highlighting a problem. The point was that without an underlying commonality between the superficially the same terms (as appears in the "definitions are tightly coupled to models" combined with "models are all there is") used in different models the similarity in names of those terms is meaningless.

>Does free will exist is not the question anyone was interested in answering in this thread.

Ah, well, it might not have been the topic of the OP, but it was more-or-less what I answered to originally (recall my original quote: "Well, whether or not it does exist is in part very contingent on how we define it"). The answer to the question is also relevant in the context of the grid in the OP, since it is what moves us left or right... but the bigger point was whether the question actually makes sense in making the comparisons between the models, as this quote is from the "pure models, model-coupled definitions" section, in which the issue is as I described above.

>The relevance of the concept is that without a conception of free will, moral frameworks don't make sense.

A working concept might be helpful, but here's a thought experiment: let's imagine a world almost just like ours. In that one there exists a god who's separate from all the people not just in the way that gods usually are, but even further - nobody even knows this god exists. No temples, no prophets, no holy book, nothing at all. Yet this god is real and has access to a celestial control board with many switches on it. One of those switches is for "free will", and it's currently in the "off" position. People live their lives with all their "decisions" being simply an inevitable consequence of the great state-based cellular automaton that is the universe under the hood. This does not affect their experience of it though; all of those decisions feel like their own, even if they were ultimately inevitable because of how things were when the decision was made. One day though the god flipped the switch to "on". Now, people can genuinely make decisions. And they have to, since life keeps going on as before, it's just that a switch for one specific thing was flipped somewhere out there, unknown to all. So, in this brave new world, how do they do it? Well, they unconsciously weigh their options. They weigh them by what they currently know and believe, they weigh them against their experiences and fears and desires, they weigh them by the biases of their culture and even their very biology and the physical structure of their brain... and one decision rises to the top and that one is taken. Taking a different one would have required going against a mountain of forces railroading them into it, and so would have been practically an impossibility, but it wasn't technically an impossibility. At least that's the change that's expected. The god's control board has the labels on some of the switches mixed around, and it's possible that "free will" is one of those and the flip actually turned it off instead of on.

What truly has morally changed when the switch was flipped? And can people even tell what position the switch is in? If so, how? If not, then how can its position matter for them when they create a system for making any decisions at all - including in particular decisions about what behaviours are right and wrong and the extent of each (morality)?

Ultimately it doesn't seem to me like whether there really is or isn't a phenomenon that people assign the label of "free will" to makes much difference, so if we have to do it we might as well pick whatever option for that is convenient and just go from there.

 No.388

>>384
>It would be hard to judge models on how they "correspond to reality" if there weren't an underlying reality to begin with; that would be the immutable factor we're trying to build models of, and which wouldn't actually change as we adopt different ones.
If this is what you want, I can tell you there is no immutable factor we can call free will, which is consistent across models. Different models have different conceptions of what free will is in an ontologically objective reality, but they start from the springboard of trying to account for the human tendency to want to assign agency and responsibility.

>The point was that without an underlying commonality between the superficially the same terms (as appears in the "definitions are tightly coupled to models" combined with "models are all there is") used in different models the similarity in names of those terms is meaningless.
Something is a little bit off about this sentence. Can you restate the idea you're trying to convey here?

>Ah, well, it might not have been the topic of the OP, but it was more-or-less what I answered to originally (recall my original quote: "Well, whether or not it does exist is in part very contingent on how we define it").
Sure, this is what I've been about all along. No one is making the claim, right, that it's not important to define free will, or that free will can be said to exist without knowing what it is, which is why I came out of the gate pretty annoyed at some of these comments people were making in regards to the topic.

>The answer to the question is also relevant in the context of the grid in the OP, since it is what moves us left or right...
Well, it's true that the question is relevant in this sense, but I think focusing on this misses what the most defining feature of either model is. The movement left to right is most of the time dependent on your conception of free will, which is generally what determines whether free will can be said to exist from the point of view of your model or not. Compatibilist definitions of free will are necessarily distinct from hard deterministic conceptions.

What is "pure models, model-coupled definitions"? I have no sense of what you're referring to, or which issue you're describing.

>Ultimately it doesn't seem to me like whether there really is or isn't a phenomenon that people assign the label of "free will" to makes much difference, so if we have to do it we might as well pick whatever option for that is convenient and just go from there.
Well, more than just convenient, right. You're getting closer to characterizing how the actual discussion goes, I think. Philosophers would look for the model that is most pragmatic, most congruent with reality as we understand it, that has explanatory power, and that is produced with methods we trust and find reliable.

 No.390

>>384
>"No one would take me seriously" isn't exactly a concern, especially when crafting a deliberately absurd example for the purposes of highlighting a problem.
One more thought. Even though I don't understand the sentence that comes after, I can explain why I brought up that no one would take you seriously.

You seem to be of the impression that the approach I characterize is inherently flawed, because it can lead to absurd constructions.

The counterpoint, "yes it can, but no one would take this seriously", is a perfectly valid refutation. Yes, it is possible to come up with bad theories, just like it is possible to build bad buildings with architectural theories, or arrive at bad conclusions with the hypothetico-deductive model, or make mistakes in any other sense, but there are checks and balances to these kinds of processes.

Just because it is possible to make an obviously stupid model by defining things in really arbitrary ways that technically leave your theory internally consistent, but confusing and useless in application, does not mean that this is something that regularly takes place. It doesn't, because we judge theories by many, many other aspects than internal consistency alone, right.

 No.506

>>388

>If this is what you want, I can tell you there is no immutable factor we can call free will, which is consistent across models.

Consider again the "person as a cuboid" example of a model. It's at odds with other models of people, and not consistent in the slightest with some of them (for example, there's a model of crowd behaviours which defines people as particles undergoing Brownian motion). Yet the application and investigation of those models provides insight about the same thing, since there's a common thing external to them both (the people of the objective reality) that they are trying to model, even if they're doing so in different ways. It's not really about the motivation for creating the model.
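To make that concrete, here's a minimal sketch in Python (my own toy illustration, not anything from this thread; the room size and the "person footprints" are made-up numbers): two crude models of a person, used to answer the same external question of roughly how many people fit in a given floor area.

import math

ROOM_FLOOR_AREA_M2 = 50.0  # hypothetical 50 m^2 room, purely illustrative

def capacity_cuboid_model(floor_area_m2):
    # Model A: a person is a cuboid with a 0.5 m x 0.5 m footprint (made-up figure).
    footprint_m2 = 0.5 * 0.5
    return int(floor_area_m2 / footprint_m2)

def capacity_disc_model(floor_area_m2):
    # Model B: a person is a disc of radius 0.3 m (also a made-up figure).
    footprint_m2 = math.pi * 0.3 ** 2
    return int(floor_area_m2 / footprint_m2)

print(capacity_cuboid_model(ROOM_FLOOR_AREA_M2))  # 200
print(capacity_disc_model(ROOM_FLOOR_AREA_M2))    # 176

The two "definitions of a person" don't resemble each other and the numbers disagree, but both are plainly about the same external thing: actual people standing in an actual room.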

As for the rest, I think it would be best if I broke down a list of the assumptions I make.

1. A model is a collection of statements and concepts which provides a framework for thinking.
2. When constructing a model, the idealised starting point is the collection being completely empty. A model exists as its own entity which may or may not have a relationship with objective reality.
3. Any and all definitions we put into a model are true by default within it. They're part of the axiomatic building blocks of the model.
4. All inferences made within a model are relevant only to this same model. The models aren't just completely independent of each other, they're completely agnostic of each other... with the exception of models which include the entire axiomatic + definitional base of another, allowing them to be linked into a hierarchy. This is a special case where one model exists within another, rather than being a separate thing.
5. As a corollary of the above, when two different models not linked in such a hierarchy use the same term for their definition, that fact technically means nothing by itself - these concepts are completely unrelated to each other, even sharing that name. The definition would actually have to be the same.

-------------------------------------

#1-#5 are what I'd consider pure models. Collections of thought that allow building up on themselves, existing separate, independent and agnostic of each other.

#5 in particular leads to the following example: if model A defines trampolines as bouncy surfaces, while model B defines trampolines as black rectangles, it could be natural to create a grid like so:

trampolines | bouncy | rectangular
------------+--------+------------
model       |   A    |     B

Yet by this point of the list, with just #1-#5, such a grid would actually be misleading, since the "trampolines" of A and the "trampolines" of B are not only not the same thing, they have nothing to do with each other in the slightest. The common name is effectively a coincidence, whereas the grid instead suggests there's some common thing "trampolines" external to both, that A and B describe/model in different ways. Further on, consider a bouncy black rectangle. It would be VERY natural to say that it's a trampoline according to both models, but actually no, this means falling into the exact same pitfall. This comes from perfectly natural, unspoken underlying assumptions. These assumptions are going to be talked about in #6 and they're actually not a bad thing, but we need to be aware of them.
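As a toy sketch of that pitfall (my own construction in Python, nothing the thread itself defines; model_a, model_b and the predicates are invented purely for illustration), treat each pure model as nothing more than a bag of its own definitions:

# Each "pure model" as a bare mapping from term names to that model's own definitions.
model_a = {"trampoline": lambda t: t.get("bouncy", False)}
model_b = {"trampoline": lambda t: t.get("shape") == "rectangle" and t.get("colour") == "black"}

bouncy_black_rectangle = {"bouncy": True, "shape": "rectangle", "colour": "black"}

# Both predicates happen to accept this object...
print(model_a["trampoline"](bouncy_black_rectangle))  # True
print(model_b["trampoline"](bouncy_black_rectangle))  # True

# ...but under #1-#5 that coincidence means nothing: the key "trampoline" in
# model_a and the key "trampoline" in model_b are unrelated definitions that
# merely happen to share a name. Calling the object "a trampoline in both models"
# smuggles in exactly the external commonality that #6 makes explicit.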

-------------------------------------

6. The pure models might be interesting, but they aren't actually practical. Instead we make tacit assumptions. We can use definitions out of a common pool without explicitly doing so (when I defined people as cuboids, note how I didn't define "cuboids" in the first place), which attaches them to our model. Basic facts which are essential for the model to work but which aren't unique to it are also tacitly assumed to be part of it. But then there's also the big one - the assumption that when definitions in different models use the same label to refer to whatever they're defining, they are actually referring to the same thing. The only way to make this work is if that thing is external to both models, and the in-model definition only specifies some aspects of it that the model will focus on. And so the grid for the trampoline example starts to work.

That took a while to write but it should at least enable more specific questions to be asked.

>Well, more than just convenient, right

Nah, "convenient" was right. If a concept is so orthogonal to reality that its truth or falsehood literally has no effect anyone in said reality can detect (so, that blows the "congruent with reality" criterion out of the water from the start), just picking the convenient stance and going from there seems to me to be correct. It's possible discuss what's more convenient, at most.

>>390

I think there's a different sort of confusion, mostly about what I'm attempting to do. What that is, is the use of logic at its most raw, mathematical level: the proof by contradiction. If I take an assumption ("the square root of 2 is a rational number") or a set thereof, and from that point on conduct my reasoning using only steps that are logically allowed ("so therefore there must be two whole numbers p and q it is a ratio of that have no common factor. Let's then write it out as an equation sqrt(2) = p/q and do some basic algebra") but arrive at a falsehood/an absurdity ("it turns out that both p and q are even... but at the same time, as we said earlier, they don't have common factors"), this exposes the existence of an error in the basic assumption ("sqrt(2) is not a rational number after all") or the set thereof.
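For completeness, here is that sqrt(2) argument spelled out in full; this is just the standard textbook proof, written with LaTeX-style notation for the symbols, nothing specific to this thread:

Assume $\sqrt{2} = p/q$ with integers $p, q$ that share no common factor.
Then $2 = p^2/q^2$, so $p^2 = 2q^2$, meaning $p^2$ is even, hence $p$ is even: write $p = 2k$.
Substituting, $4k^2 = 2q^2$, so $q^2 = 2k^2$, hence $q$ is even as well.
Now $p$ and $q$ share the factor 2, contradicting the starting assumption, so $\sqrt{2}$ cannot be rational.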

It's a straightforward purely logical process that under-the-hood doesn't actually rely on anyone's opinion to work. The idea isn't to produce a "useful" object at the end of the construction, the idea is to expose a hole. What we do with that hole is a separate question entirely.

 No.513

>>506
Well this all looks very nice and I like it.

>5. As a corollary of the above, when two different models not linked in such a hierarchy use the same term for their definition, that fact technically means nothing by itself - these concepts are completely unrelated to each other, even sharing that name. The definition would actually have to be the same.
I do have a bit of a problem with this assumption, not because I think it's a bad deduction from the other assumptions, but because I think it doesn't accurately characterize how things tend to work in reality. The concepts are unrelated, in a sense, but they're attempting to account for the same human tendency, so in application this makes a comparison valid. If you're going to build a moral framework, and your framework relies on free will, you have to consider which kind of conception you're going to use. The two conceptions of free will we've discussed in this thread are likely to be contenders for such application, and it's valid to compare and contrast them.

In regards to 6, I agree, and of course I am not claiming that free will in disparate models is always or even usually referring to the same thing.

>Nah, "convenient" was right. If a concept is so orthogonal to reality that its truth or falsehood literally has no effect anyone in said reality can detect (so, that blows the "congruent with reality" criterion out of the water from the start), just picking the convenient stance and going from there seems to me to be correct. It's possible discuss what's more convenient, at most.
That really must depend on what you mean by convenient.

>It's a straightforward purely logical process that under-the-hood doesn't actually rely on anyone's opinion to work. The idea isn't to produce a "useful" object at the end of the construction, the idea is to expose a hole. What we do with that hole is a separate question entirely.
I see what you mean about how your process works. I find it interesting, and of course it is not unusual to strive for this kind of process in philosophy, but I wonder whether it's of much use to view the world through only a purely logical process in this particular case. Can you actually say anything relevant with this?

 No.516

>>513

>I think it doesn't accurately characterize how things tend to work in reality

Isn't this covered by #6 though? That to make things practical we do have to bring in a whole host of tacit assumptions? And that it's okay to do so, so long as we're aware it's what's happening?

As you say, there's a human tendency to think of "free will" in a certain way. I'd say it's even a specific aspect of human decision making that these models are trying to address. That aspect is common to them, yet the commonality is not explicitly defined, and what makes it work for enabling comparisons is that it's part of that grab-bag of tacit assumptions we're bringing in. Otherwise the "free will" of one model and the "free will" of another would simply be different unrelated things with the same name. How can you compare such things?

And yes, you do need to pick the specific variant when you want to work within a model. Just be aware that it might not just be a different slice of the same thing, but a different thing entirely.

>That really must depend on what you mean by convenient.

Well, here's a few examples of stances I would call convenient. Consider Russell's teapot, except in a different star system for good measure. The consequences if it were real would be massive, yet we can't check for its existence. The convenient option is to essentially just shrug and proceed with the assumption it's not there*.

Consider the old cliche "if a tree falls in a forest and there's nobody around to hear it, does it make a sound", with its modern-day update to account for the possibility of scientific equipment etc. acting as observers, and "hearing" being just whatever form of detecting through observation. What if it didn't make a sound under such circumstances (because, say, the simulation hypothesis is right and the simulation is optimized to save some CPU cycles)? That would be hugely interesting, but we'll never know it is so. Might as well shrug, apply Occam's razor and go with the convenient option of "yes it does".

Consider last-Thursdayism. I like last-Thursdayism, it's a device with a surprising amount of utility. What if the universe isn't actually billions of years old, like the evidence says, but was actually created last Thursday in the exact state it was in, evidence of billions of years and all, or, on a smaller scale, people's memories of their lives before last Thursday also being created in that moment? It's perfectly consistent with how the universe is right now and it would be impossible to detect otherwise. It would have serious consequences if it were true and we found out. It's still convenient to not even really entertain the possibility of it being true and to just go with the assumption that the universe looks billions of years old because it is.

*this example is different than the others in that if that teapot IS there, we MIGHT actually stumble upon it someday and then we WOULD have to change. But in the meanwhile, the convenient option stays the same, to the point that even if we gain the ability to look for the teapot, actually spending any resources in doing so would be folly.

>Can you actually say anything relevant with this?

It's a convenient and useful tool in mathematics, in very wide use, with certain "obvious" facts being extremely hard to prove if it's not employed. Still, I guess you mean outside of that area. The generic equivalent of this technique would be "reductio ad absurdum". It can be abused into a fallacy, but that happens by using invalid logic to get to the absurd conclusion, making the fallacious version a non sequitur. If we don't do that, well, I'll quote Wikipedia:
>The earlier dialogues of Plato (424–348 BCE), relating the debates of his teacher Socrates, raised the use of reductio arguments to a formal dialectical method (elenchus), now called the Socratic method[5] which is taught in law schools. Typically Socrates' opponent would make an innocuous assertion, then Socrates by a step-by-step train of reasoning, bringing in other background assumptions, would make the person admit that the assertion resulted in an absurd or contradictory conclusion, forcing him to abandon his assertion.

Here's an old example, Galileo's proof that heavier things don't fall faster just because they're heavier (caveats like air drag etc. are to be set aside for the moment; this is a world in which the only variable is weight).
1. Assume heavier things do fall faster.
2. Let's have two stones, a small light one and a large heavy one.
3. Because the small stone is lighter, it would fall slower than the heavier one.
4. If we were to attach both stones together, the large stone would have to drag the small stone to make it keep up, slowing it down. And so our two stones attached together would fall down slower than just the large stone on its own.
5. On the other hand, two stones attached together weigh more than just the large stone on its own, so they would fall faster.
6. And so if heavier things fall faster just because they're heavier, then combining two stones together makes them fall both faster and slower at the same time.
7. Therefore, the assumption was wrong and just being heavier doesn't make things fall faster.

( source - this vid: https://www.youtube.com/watch?v=sVUMAqMmy7o )

So yes, this is a general-purpose tool and a very useful one at that.

 No.628

>>516
Sorry, I've been working out some shit the past few days.

>Isn't this covered by #6 though? That to make things practical we do have to bring in a whole host of tacit assumptions? And that it's okay to do so, so long as we're aware it's what's happening?
I felt like it didn't, really, because you characterized it as if it were being drawn from a vague notion of free will common in the discourse, and as if this vague notion might have an ambiguous meaning that resolves to an individual conception when read by different people. But I think it's more common for a good model to have a specific and defined idea of what free will is, since it's so foundational to assigning moral value to an action.

>How can you compare such things?
I'm not sure exactly what you mean by tacit assumptions, but it seems you can easily compare different conceptions of free will by examining the assumptions behind the conceptions, how they differ in the ways they tie free will to reality, and what the consequences of implementing different conceptions into a moral or ethical framework would be. There's plenty to compare, I think.

Are you suggesting that there is nothing to compare or what do you mean?

>Well, here's a few examples of stances I would call convenient. Consider Russell's teapot, except in a different star system for good measure. The consequences if it were real would be massive, yet we can't check for its existence. The convenient option is to essentially just shrug and proceed with the assumption it's not there*.

I agree, but I don't think just saying convenient is sufficient explanation for why we don't want to believe this hypothesis.

In essence, what your examples illustrate is why we like falsifiability theory: the idea that any good theory must be falsifiable, that it must take risks.

There is more to it than just being convenient; it's a convention we know is one of many parts required to produce excellent knowledge. But it's not the only way one can judge knowledge, and I think you know this, but you're kind of ignoring it. Or are you genuinely just a critical rationalist, to the exclusion of all other ways of evaluating knowledge?

Obviously, aside from falsifiability, which is the only theme present in all of your examples, there are a host of other ways to evaluate knowledge, and many of these can easily be used to evaluate theories of determinism and free will as well.

I would only have to look to pragmatism, coherentism, cynicism, positivism or rationalism to find solid frameworks that can be used to evaluate these models.

Or what do you think? Is knowledge molded by all of these frameworks adequately captured by the word 'convenient', and is that a fair characterization of the complex process of evaluation that these theories go through in development and peer review?

>So yes, this is a general-purpose tool and a very useful one at that.
I guess I meant more like: can you actually say anything useful in this context by taking a hard-line deductive approach and examining the assumptions of, for example, compatibilist or hard determinist theories?

