Anyone want to examine my ethics?


Postby Darkflame » 31 Aug 2017, 06:45

OK, this is a bit weird, but what follows are some random ramblings on my attempt to find a universal ethical principle.
I've been thinking about this stuff for a long time and kinda needed to finally get it down.
(This isn't preaching that everyone should follow it, either. I mostly just want to get my ideas out there and get some feedback.)

Over my life I have slowly been trying to develop and refine an ethical principle: a simple guide to how to act in order to be a good person. Not that I won't fall short, but it's something to aim for.

"Do no harm" is a simple one, for example.

I used to be more specific, though. Inspired a bit by Asimov, I had:

"Though shall not though action or inaction allow a sentient being to come to harm"

To me, action and inaction are essentially inseparable; any conscious choice to pick one future over another carries at least some responsibility. "Sentient being" was simply me replacing Asimov's "humans" with something more generic.

Eventually, though, I shortened this idea down to:

"Minimise harm"

After all, it's very hard to take actions that don't harm some people in some way, however slightly. Actions have all sorts of consequences.
This can be interpreted as looking for the "lesser evil", which I think is fine, provided you're honestly trying to reduce net harm overall - and not merely using _some_ benefit as an excuse for something you wanted to do anyway.

However, I then realised that "harm" means different things to different people. I had shifted all the complexities of the world, and of what is "good", onto a single word.
Not only does the word mean different things to different people; people also count different things as "harm" to themselves. Without a firm definition it's kinda useless.

So I tried to define it:

"Minimise Harm*

*Where Harm is defined as something the potential 'harmed' person would not want to have happen to them."

I was quite pleased with this. It's theory-of-mind based, but I think that's as it should be. It's not about what *I* consider harm; it's about what the potentially harmed person does. *Would they want this happening to them?* is the question to ask when determining whether someone is being harmed or not.

So, this takes into account various religious/cultural/individual preferences.
However, it doesn't take into account how people change day to day and over their life.
A child might not like getting vaccinated - but as an adult they probably appreciate that they were.

So I had to write a patch:

"Minimise Harm*

*Where Harm is defined as something the potential 'harmed' person would not want to have happen to them - assuming that person has full knowledge of the situation and its consequences, and has the mental competence to process it"

Not ideal, as it involves a lot more judgement calls. But I never said good ethics would be easy.
Shortly after that, though, I realised there was a simpler way to look at it that amounted to the same thing:

"Minimise Harm*

*Where Harm is defined as something the potential 'harmed' person would, _later_, not have wanted to have happened to them"

i.e.
Consider the future: would this person have wanted this to happen to them in their life?
It still involves extrapolating both future events and what the other person feels, but the concept seems a bit simpler to express.

That said, it's still messy to have a definition like this.

Eventually, just a few months back, I had a eureka moment and think I got an ethical equivalent that amounts to the same thing, but as a concept seems to work a lot better.

You see, once you hit upon the idea of "would this person have wanted this in their life?", I think you hit upon a more fundamental idea than just the negative of "harm".
What state do people want to be in?

So rather than saying "minimise harm",

why not the positive:

"Maximise people being in the state they would want to be in."

?

Now "state" could be anything, but it clearly excludes anything that would be considered harm in the early definition, while also encourging things to get better.
Can you make a persons life a bit better without making other people in a state they would not want to be?

It's still not by any means an easy principle to follow. Not by a long shot. It's not a simple rule that can be applied blindly - applying it to anything important like politics would take a lot of thinking, and you'd have to be very careful to be honest and free of preconceptions when you do so.
But it's easy to understand, right?

Does it have obvious flaws? Things I've overlooked?
"Bad things that could happen if this idea is used as a guide to my actions in life"?

Oh, and thanks for listening to my ramble :)
Re: Anyone want to examine my ethics?

Postby Elaro » 04 Sep 2017, 18:41

(Yeah, ethics thread!)

Reformulated, I think your last statement could be written as: "Minimize the difference between the Universe and the Willed Universe",
where "Willed Universe" means "the description of a Universe where a maximum of people are in a state of maximal satisfaction for a maximum of the time";
where "state of satisfaction" means, of a person, that they have "accomplished or are in the course of accomplishing their goals, whatever they may be".

I think another way to look at it would be to say: Good is that which "maximizes the number of accomplished wills", where a "will" is "a set of goals held by a person", where a "goal" is "a description of a part of the Universe such that the person having the goal wants the description to be true and will act, if they can, to accomplish it", and where "accomplishing" a [will|goal] means "doing all that is good in order to make it true".

Of course, this is computationally VERY INTENSE, but this is an ideal that we can approximate towards, I hope.
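
To make that a bit more concrete, here's a rough toy sketch in Python of what "maximize the number of accomplished wills" could look like as an optimisation problem. To be clear, everything in it (the Person class, the goal predicates, the action-to-world prediction) is a made-up stand-in, not something we actually know how to compute:

from dataclasses import dataclass
from typing import Callable, Dict, List

# A "world state" is just a dictionary of facts here, and a goal is a
# predicate over that state. Both are stand-ins for knowledge we don't have.
World = Dict[str, bool]

@dataclass
class Person:
    name: str
    goals: List[Callable[[World], bool]]

def accomplished_wills(world: World, people: List[Person]) -> int:
    """Count how many individual goals are satisfied in a given world state."""
    return sum(goal(world) for person in people for goal in person.goals)

def choose_action(world: World, people: List[Person],
                  actions: List[Callable[[World], World]]) -> Callable[[World], World]:
    """Pick the action whose predicted outcome satisfies the most goals.

    Each candidate action is modelled as a function from the current world
    state to a predicted new world state; that prediction step is where the
    real computational cost (and uncertainty) lives.
    """
    return max(actions, key=lambda act: accomplished_wills(act(world), people))

Even in this toy form you can see where the blow-up comes from: you have to predict a whole new world state for every candidate action before you can score it, and real goals and world states are nothing like little dictionaries of facts.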
Re: Anyone want to examine my ethics?

Postby AdmiralMemo » 05 Sep 2017, 22:05

Just want to note that if the principle is extended from one person to groups of people, then there's always the question of the rights of the individual vs. the well-being of the group. "The needs of the many outweigh the needs of the few or the one." But, conversely, is it ethical to chop one person up against their will to give transplants to 25 others, who can then live longer, happier lives, accomplishing their own wills? Democracy is always dangerous to toy with, especially if people don't have inherent rights.
Re: Anyone want to examine my ethics?

Postby Elaro » 06 Sep 2017, 16:05

AdmiralMemo wrote:Just want to note that if the principle is extended from one person to groups of people, then there's always the question of the rights of the individual vs. the well-being of the group. "The needs of the many outweigh the needs of the few or the one." But, conversely, is it ethical to chop one person up against their will to give transplants to 25 others, who can then live longer, happier lives, accomplishing their own wills? Democracy is always dangerous to toy with, especially if people don't have inherent rights.


Oh right, I forgot: ethics is the art of ranking courses of action from best to worst. Anything that is less than the best available strategy is unethical.

As to your question, no, it's not ethical. Because in a society where doctors kill healthy people to save the sick, no one is ever safe, and "being safe" is a pretty universal goal, not to mention a vital one for a number of other goals. Therefore, it shouldn't be policy.

Another approach is to doubt the power of Humankind to pull off such a transplant: that is to say, if you kill someone, you might not even manage to save the others, rendering the sacrifice useless. In fact, taking someone's organs against their will is an almost surefire way to get them damaged, thereby making the act pointless.
Re: Anyone want to examine my ethics?

Postby AdmiralMemo » 15 Sep 2017, 19:08

Fair points. I was mainly playing devil's advocate there just to make sure everything was on the right track. And, of course, the "transplant" idea wasn't considering the realities of transplant rejection and such. It was just a thought experiment with a 100% transplant success rate.

However, one sticking point remains for me: who gets to decide what strategy is "best"? Wouldn't different people have different ideas of "best" and so "ethics" becomes personal? What's "ethical" to one might not be to another?

My own personal idea goes something like this. Imagine an omniscient, omnipotent, omnibenevolent being. (This could track into both Theodicy and the Problem of Evil, but I really don't want to derail the thread like that with religion. However, we don't need to worry about that really, because all that's needed is the thought experiment to set it up, and then it's not needed anymore.) So, just for the sake of the thought experiment, take it that this being exists. This being, when given a situation, would naturally do the ethical thing. If this being knows all, can do anything, and wants to do the best good thing, this being would do it. So that action would be the most ethical action in the situation. That would be the goal to reach.

Now, we come back from the thought experiment to real humans. We are not omniscient or omnipotent, and, try as we might, we're still not omnibenevolent. But we should still strive to approach whatever action we theorized earlier. That's still the goal of the most ethical action, even if we don't or can't know what it would be. We're limited in our knowledge, power, and goodwill. So we do the best action that is available to us. We can take even more ethical actions by increasing one or more of those factors. If we know more, we can make better decisions. If we have more power, we have a wider range of options to choose from. If we're more kind and caring, we'll more often choose the options that help people the best.

So "ethics" can become like a spectrum, where the goal is bounded in 3-dimensional space by knowledge, power, and benevolence. The question then remains: Do certain actions exist that are never ethical, even when you vary the 3 variables in different ways? Are some lines never to be crossed? Or are they only unethical to us currently because we have a certain amount of knowledge, power, and benevolence?

This is certainly a question that I can't answer, and I'm sure greater minds than mine have mulled it over for a long time.
