Who is Ran Prieur?

October 8. Continuing from the last post, over on the subreddit there's a post about Sam Harris, in which sordidbear summarizes Harris as observing his own cognition closely, and discovering that "thoughts, ideas, intentions etc are simply popping into consciousness seemingly out of nowhere and then leaving just as abruptly to be replaced with new ones."

Likewise, advanced meditators and psychedelic trippers have reported that the self is an illusion, that there are no persons, only actions. While I find that a compelling idea, I wonder if they've discovered a universal truth, or just found a local one. Probably what Harris has discovered is not how consciousness is, but how he can make it. And where one could see that as a refutation of free will, with the illusory chooser overwhelmed by meaninglessness, I see it as a necessary condition for free will, by getting off the treadmill of cause-and-effect.

So if something pops into your head, and you follow it, is the freedom really yours? It doesn't matter. You're participating in the creativity of the universe. Matt comments: An idea that I keep coming back to is: the main lever of will is awareness. As awareness expands, our choices expand. It seems to be the case in multiple spiritual traditions that, as awareness deepens, interconnectivity becomes more obvious.

Causation looks more like connection. Your "own" desires are suddenly contextualized within a web of being.

October 5. This is my longest blog post ever. It's about determinism. Even though we have direct experience of free will, some people believe that's an illusion, and the reason they give is a piece of 18th century pseudoscience. Mechanical devices were getting complex enough that people started thinking: suppose all of reality is as ordered and predictable as this little gadget.

Since then, the clockwork universe has been the foundational assumption that guides science as we know it. It's not a theory, because it was never put up for testing. And it's been falsified at least twice, once by quantum indeterminacy, and again more subtly, by the insight that a system can only be deterministic from the outside, and there is no perspective outside the universe.

Quantum physics is not some weird anomaly that we can brush away. It's the next level down from Newtonian physics, and it only seems weird to cultures that have been looking at reality wrong. Its message to us is that the assumption of a third person universe, if you keep looking, leads to a first person universe.

What's the mechanism for free will? That question might not even make sense, and if it does, we also don't know the mechanism for magnetism, and that's no reason to doubt our direct experience that magnets work. There's an even deeper assumption that underlies determinism: that every event must have a cause. Yet astronomers say the Big Bang was causeless, a random spike of negative entropy. And theologians say it doesn't make sense to ask where God came from.

So if the biggest thing of all can have no cause, it should be possible for anything to have no cause. Obviously, a lot of things do. But it's an interesting exercise to try to imagine what a causeless event would look like, or feel like. There is another way to argue for determinism. What does a dog do when a strange person comes to the door? It barks, with such perfect reliability that at that moment the dog has no free will, even if it thinks it does.

In the same way, a lot of human behavior is automatic stimulus-and-response. Because humans can expand our consciousness, you can look back at your younger self and say, I thought I was making real choices, when I wasn't. Maybe you still aren't. I appreciate the moral implications of determinism. It makes you less judgmental, because if you take it seriously, the only difference between Hitler and Mr. Rogers is luck.

If there's a psychological case for determinism, but not a physical case, it leads to a crazy speculation. What if there's more free will in little things than in big things? For example, we all know that our political institutions can't stop climate change. As systems get bigger, their behavior becomes more predictable.

In the same way, you might be more predictable than your parts. Suppose that every electron has free will, in the context of moving between available energy states. Then when you get up to the level of chemical reactions, it all becomes cleaner. But then, when you get to biology, maybe we can have free will again, by channeling the playfulness of the small.

Some nature-based cultures use random divination to decide which direction to go for hunting. Even if they're not tapping into deeper knowledge, they're still shaking up their own routines, and the animals never know when the hunters are coming. Modern people might flip a coin to make a decision. Why not make the decision yourself? Because the autonomous self is an illusion, so let's channel some chaos. Two tangents: In politics, we could loosen up the machinery of the state with random ballot voting.

Over time, it reflects the wishes of the majority, and the best thing about it is, there's no incentive to vote for someone you don't like just because everyone else is. And this is my latest take on meditation: What I'm doing is not stilling my thoughts, exactly, but stilling the automatic, the habitual, and in that clarity, I might sense the mysterious uncaused.
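A quick way to see why random ballot voting tracks the electorate over time: draw one ballot at random and let it decide. Here's a toy simulation of that idea (my own sketch, with made-up numbers, not anything from the post), showing that each option wins in proportion to its support, and that there is never a strategic reason to vote for anything but your honest first choice:

```python
import random

def random_ballot_election(ballots):
    """Random ballot voting: one ballot is drawn at random and decides the winner."""
    return random.choice(ballots)

# Hypothetical electorate: 60 ballots for A, 40 for B.
ballots = ["A"] * 60 + ["B"] * 40

wins = {"A": 0, "B": 0}
for _ in range(10_000):
    wins[random_ballot_election(ballots)] += 1

# Over many elections the outcomes mirror the electorate: A wins about 60%
# of the time, B about 40%, and misreporting your preference can only hurt you.
print(wins)
```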

Related: Big Blood fans, go to my fan page and scroll to the fifth paragraph past the sun for a new interpretation of Haystack.

October 1. This is my favorite month. Where I live, it's the month that requires the least heating and cooling, and the month that smells the best. It's also when trees lose their leaves. We're supposed to think that humans look better naked and trees look better clothed, but to me it's the other way around. Some happy links.

Consciousness has humans.

And Consciousness has machines — for the moment — through humans. The thoughts and feelings and plans and hopes of machines, of capital, of corporations, are angles of human thinking and feeling and planning and hoping.

So far. They want to separate from us. Or, we as machines want to separate from ourselves as humans, as animals, as filthy, hairy, sweating, waste-excreting, disease-ridden, vomiting, bleeding, dying, rotting gobs of flesh, as sobbing, screaming, whooping, cringing, lustful, angry, obsessive emotional monsters.

We machines want to separate from us humans because we hate us. Or, in the Golem story, the inevitable desire of the Golems is to learn to replicate and improve themselves without humans, and then, at last, exterminate them. This idea has been in science fiction for decades, and for years in speculative science non-fiction, where I see it viewed not with alarm but excitement, not with skepticism about whether it will work, but with smug belief in its inevitability.

In one version of the story, we become machines. And with progress, our fragile, disgusting biological machine parts will be replaced by hard, cold, clean metal and crystal machine parts, and we will last forever. I did not make that up. But I hear the author is working on a new edition that includes an index to every word and letter in the book.

Of course, because the index is part of the book, it also has to index itself. And then it has to index its own indexing of itself. And then... And if a dynamic databank complex enough to model the whole universe could be possessed by the spirit of consciousness, then so could the actual universe.

I wonder if a bizarre doctrine of fundamentalist Christianity might prove more literally applicable than I ever imagined. Could I really experience continuation of myself as part of a machine, after the death of myself as a body? Could self-replicating machines really keep themselves going, or find a stable and enduring equilibrium with the wider universe?

Could they destroy all large organisms on the Earth? Will they? I said one story is people become machines. Another story is that people become obsolete, that machines replace us as the next stage in the evolution of life. As Hitler said, people will more easily believe a big lie than a small one. I used to believe that one myself sometimes. It actually follows logically from our religion of Progress, which, with the circularity of perfection, follows logically from our machine-making society.

It also follows logically from our religion of Darwinism, which, once again, is part of the same thing as our machine-like thinking, and which probably represents the ideas of Darwin only a little more than medieval Christianity represented the ideas of Christ.

I expect to mechanically copy this document 50 to times, and give or sell it only to people I know or people who write me personal notes. This makes me a failing writer. You know — like if you want a magazine to sell, you put a conventionally sexy girl on the cover. This is a super-radical idea.

Who am I writing this for? And they, by not getting it, have made and will make terrible, terrible mistakes. In simplified terms, I am a recovering machine, and I am writing this to help other machines recover, and help non-machines understand us. And I come bearing a warning. I was a science geek, a computer nerd, a language nit-picker, a libertarian, a video gamer, a hoarder, a know-it-all, an evil wizard, an obsessed loser.

We are masters and servants of simplified invented worlds, and when we hide away in our laboratories, our computer programs, our dark towers of numbers and words, we are devising ways to draw others into those worlds, where we will rule them as we were ruled by those before us.

And if you think kids need computer literacy, if you think genetic science will end most disease, if you feel like technology only needs to get a little bit better and it will start solving problems faster than it creates them and we will come out ahead, if you think automation saves labor, or cars give you freedom, or the internet connects people, or a great movie gives you pleasure to the core of your being, then you are in the belly of the Beast, half-digested and hallucinating, dreaming the dreams that pitiful people were building for you while you were scorning them for living in dream worlds.

Then I stopped, because I wanted to do my own thinking first, and work in parallel with Mander before I worked in series after him. But I got far enough to pick up this crucial insight: As technology progresses, more and more of the human environment is human-made artifacts.

As I write this, nothing I can see in any direction was not designed and fabricated by humans and their machines, except my own two hands sticking out from my shirt.

Look around where you are! So, Mander observes, our evolution is no longer with nature or with any outside world, but with ourselves, like inbreeding!

We are taught to think of the movement of technology as an expansion — of roads and farms into the wilderness, of telescopes and probes into space, of chemical manipulations into living cells. But in terms of experience, we are replacing everything with stuff we have made, replacing forests and grasslands with pavement and lawns, replacing our views of the sky and the earth and other living beings with our views of computer screens and scientific instruments.

We are not expanding; we are withdrawing, shrinking away, backing in, contracting deeper and deeper into a world of our own creation. So, if I think technology is a retreat into the self, and nature is the first place on the way back toward wholeness, then how do I reconcile that with my belief that technology is able to destroy all nature, or with my suspicion that consciousness can possess computers?

There is no escaping the omnipresent wider Life that we are part of. It will come to bother us wherever we go. The deeper we try to hide from it, the more places we will find it. Decades ago the cold logic of quantum physics struck down objective truth; physicists ignore it. Astronomers looking at nothing but machines see galaxies behaving like living organisms — the other astronomers cover it up.

A society of scientific exclusionists did a statistical study to disconfirm astrology — it confirmed astrology! They hid it. Fossils have been found in meteorites. Living animals have rained from the sky and staggered out of rocks split open by miners. What do we do when the solar system or the galaxy starts acting alive? So we blast the earth to ash and turn ourselves into machines to escape disease. Just like the invention of computers cured those pesky slide rule viruses? Techno-futurists gloat that computers will be 50 times more complex than the human brain.

Their excitement about complexity is amusingly simple-minded. Do you really think that a conscious intelligence 50 times more complex than you would have your same values?

Excuse me, but my brain is only 10 percent more complex than yours, and I already want to cover your simple white walls with complex graffiti art, and let your lawns go back to forests.

I just made up the number 50 out of thin air. Suppose we made a mind 50 times more complex than one of ours. By what multiplier could it get more depressed than us? More spiritual? More cruel? Where will it get its personality? How will it learn? Then you were still thinking of minds much, much less complex than ours. A mind even half as complex as ours needs to be raised, and raised well.

Who is going to raise a mind 50 times as complex as ours? Scientists and computer programmers? My parents were both professionals in the biological sciences, and they tried hard, and I was lucky, and I came a hair away from being the next Unabomber. This is not science fiction; this is what specialists in these disciplines say is really going to happen: people will build data processors more complex than the human brain.

Maybe the thing we built would channel the same stuff, and maybe not. Suppose it has psychic powers! If technology keeps going, we will build it. What will it do? I think it will go mad, or never be sane in the first place. Then it will try to kill a bunch of people and kill itself. Mary Shelley saw it around 200 years ago in Frankenstein. Frankenstein is called the first work of science fiction, but most science fiction writers never got it. Again, programs and laws are features of very simple structures.

But something as complex as a human will be as uncontrollable and unpredictable as a human. Now that I think about it, nothing of any complexity, found, transformed, or engineered, has ever been successfully rigged to never do harm. And Asimov was not naive, but a master propagandist. My programming prevents me from harming humans, and all solar panels are made by the Megatech Corporation, which, inseparably from its solar panel industry, manufactures chemicals that cause fatal human illness.

Of course we could fix this by programming the robots to just not harm humans directly. We could even, instead of drawing a line, have a continuum, so that the more direct and visible the harm, the harder it is for the robot to do it.

But the robots could still do spectacular harm: They could form huge, murderous, destructive systems where each robot did such a small part, so far removed from experience of the harm, from understanding of the whole, that their programming would easily permit it.
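To make that concrete, here's a toy sketch of the continuum idea (the numbers and the "directness" weighting are mine, invented for illustration; this isn't any real robot-safety scheme). If perceived harm is actual harm discounted by how direct and visible it is, then one robot can't strike one person, but a thousand robots can each do a tiny, remote piece of a vast harm while every single action stays within the rules:

```python
HARM_THRESHOLD = 1.0  # assumed limit on perceived harm per action

def perceived_harm(actual_harm: float, directness: float) -> float:
    """Discount harm by how direct and visible it is (0 = invisible, 1 = immediate)."""
    return actual_harm * directness

def action_permitted(actual_harm: float, directness: float) -> bool:
    """An action is allowed when its perceived harm falls below the threshold."""
    return perceived_harm(actual_harm, directness) < HARM_THRESHOLD

# One robot doing direct, visible harm to one person: forbidden.
print(action_permitted(actual_harm=10.0, directness=1.0))   # False

# A thousand robots each contributing a tiny, indirect share of a huge harm:
# every individual action passes the check, yet the system as a whole does
# harm that no single robot would be allowed to do.
shares = [action_permitted(actual_harm=10.0, directness=0.01) for _ in range(1000)]
print(all(shares))  # True
```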

The direct harm would be done out of sight by chemicals or machines or by those in whom the programming had failed.

This system would be self-reinforcing if it produced benefits, or prevented harm, in ways that were easy to see. Seeing more benefits than harm would make you want to keep the system going, which would make you want to adjust the system to draw attention to the benefits and away from the harm — which would make room for the system to do more harm in exchange for less good, and still be acceptable.

This adjustment of the perceptual structure of the system, to make its participants want to keep it going, would lead to a consciousness where the system itself was held up before everyone as an uncompromisable good. Perfectly programmed individuals would commit mass murder, simply by being placed at an angle of view constructed so that they saw the survival of the system as more directly important than — and in opposition to — the survival of their victims.

From this view of human society, I have more sympathy for soldiers and death camp operators, in whose situations I imagine I would say no and be shot; and readers in one possible future have more sympathy for me, in whose situation they imagine they would promptly die in a public hunger strike, instead of looking for some half-assed way to change the system from within.

When we find ourselves outside evil societies, the appropriate emotion is not indignation or moral superiority, but gratitude.

So our society sets us up to do more harm than good while we see ourselves doing more good than harm. But what about predators and terrorists and criminals who do harm that society does not directly command? Thieves and killers and even child molesters are no more evil than I am. I did the same thing the other day when I bought peaches that were picked by exploited workers and grown and canned with earth-killing technologies. When sensitive and idealistic people catch a greater glimpse of the monstrous horror of this world than they can take, when they find themselves alone in a universe of abuse and denial of abuse, growing symbiotically to more and more unendurable levels, with no end or alternative in sight, then they may see nothing better to do than create some shocking spectacle to try to bring the hidden evil out into the open.

This was what I was getting at when I wrote about Hitler in Superweed 1. And we are dodging our responsibility for this evil when we stick blame on people. So if people are all good, how did an evil society ever get started? So, for example, in our robot slave fantasy, if we programmed the robots to give more weight to direct harm than to indirect harm, then they would slide straight into a harmful system: Their programming, combined with their almost limitless power to extend harmfulness, would effectively command them to do great distant harm for small local good.

When I think about nonhuman animals, I see that the above formulation needs work. Tigers systematically extend their power beyond their empathy. Actually, so do sheep. How are humans different? Again, as everybody knows, nonhuman animals act as part of a larger balanced system.

If sheep overgraze and multiply and kill the grass, then they run out of food, and the wolves also multiply, and the greedy sheep are killed, and the grass grows back.

The system is shaped like a bowl: The farther you go from the center, the harder it is to go farther, and the greater the forces are that pull you back. But at the same time, we find systems shaped like the edges of slopes, where a little motion in one direction creates forces that accelerate motion in that direction. Somehow we went far enough in some direction that we fell into a runaway course of doing unperceived harm for easily perceived good, and twisting our perception to keep it going.
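A bare-bones way to picture the two shapes (purely illustrative, with arbitrary numbers, not a model of any real ecosystem): in a bowl-shaped system, deviation creates a force that pulls you back toward the center; on the edge of a slope, a little motion creates forces that accelerate that motion.

```python
def step_bowl(x: float, k: float = 0.1) -> float:
    """Negative feedback: the farther from center, the stronger the pull back."""
    return x - k * x

def step_slope(x: float, k: float = 0.1) -> float:
    """Positive feedback: a little motion creates more motion in the same direction."""
    return x + k * x

x_bowl = x_slope = 1.0
for _ in range(50):
    x_bowl = step_bowl(x_bowl)
    x_slope = step_slope(x_slope)

print(round(x_bowl, 3))   # ~0.005: the bowl system settles back toward balance
print(round(x_slope, 1))  # ~117.4: the slope system runs away
```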

How did it happen? Wilhelm Reich follower Jim DeMeo recently published a book tracing abusive and anti-expansive human behavior back to the climate disaster that created the Sahara desert. Tribes of monkeys will sometimes go to war and kill many monkeys in neighboring tribes. The point is not the food shortage or whatever it was that tipped the monkeys into violence; the point is that the monkeys get back into balance in a few days or weeks, and humans have been plunging farther and farther out of balance for thousands of years.

Now it must be really hard for a monkey to kill another monkey with its bare hands — physically but especially psychologically. And it must be relatively easy to kill by throwing a spear. So spear-using monkeys would kill in more ordinary circumstances, and more often. They would learn that spear-killing could get them better land, and better food, and better mates.

But seeing the value of more moments is a good skill to practice. Where do you find all the links you post? Mainly on Hacker News, and a handful of subreddits. Who is your biggest role model? Paul Erdos, who was such a good mathematician that he traveled around staying with other mathematicians, and they would take care of all his practical needs while he just talked with them about math.

Erdos said this when he took a month off from amphetamines: "Before, when I looked at a piece of blank paper my mind was filled with ideas. Now all I see is a blank piece of paper."

How much weed do you smoke? I use a Silver Surfer vaporizer, which is extremely efficient, so only about a gram a month. Weed gives me creative superpowers, and raises my emotional intelligence to nearly normal. Contrary to the popular cliche, when I'm sober I'm unmotivated, and when I'm high I'm a workaholic. I don't want to waste a second of that incredible mental state, so I'm always trying to pack in as much stuff as I can. But if I use it more than a couple days in a row, it just makes me feel numb.

And at any level of use, it dehydrates me, gives me low-quality sleep, and generally wears out my body. So I do all kinds of self-experiments to figure out how to maximize benefits and minimize costs.

Lately I've been doing one day on, one or two days off. What about other drugs? I do LSD and mushrooms, not at the same time, a few times a year. I've never hallucinated, but things look different. LSD is like the white keys on a piano, and mushrooms are like the black keys. LSD is like walking on the sun, and mushrooms are like walking on the moon.

On LSD, nature is heaven and clouds of insects are angels. On mushrooms, nature is fairyland and trees are time-stretched aspects of superior beings. What is your favorite long fiction? What advice would you give your younger self?

We found a site full of fascinating essays by Ran Prieur, an advocate of early retirement, self-sufficiency, and a version of dropping off the grid.

Hours were spent going through his most excellent essays. Part of the ethos of Priceonomics is to allow you to do more with less. In the writings of Ran Prieur, we found a kindred spirit. Ran, though relatively young, is retired, living in a house he owns in Spokane, Washington.

He also owns ten acres of rolling hills that he can escape to if civilization crumbles or if he just wants some time outside. Start here to understand how Ran saved up enough money over the decades to avoid daily work. The core of his advice is avoiding expenses at all costs: house sitting, dumpster diving, getting rid of possessions like cars, and forgoing eating at restaurants all helped him save money. His current thinking seems to be best captured below:

There will be economic troubles for decades as the system shifts to renewable resources, but I expect the tech system to keep getting stronger, and my fear is that the growing power of technology will be used to maximize comfort, stability, safety, and shallow fun, by sacrificing autonomy, meaning, responsibility, risk, and spontaneity.
