The suck is why we're here

(nik.art)

430 points | by herbertl 1 day ago

32 comments

  • yakattak 1 day ago
    > Because I don’t write a daily blog to crank out a post every day. If that was the point, I’d have switched to AI long ago already. I write a daily blog to make sure I remember how to think.

    I feel like this will get missed by the general public. What’s the point in generating writing or generating art if it gives next to zero feelings of accomplishment?

    I could generate some weird '70s sci-fi art, make an Instagram profile around it, barrage the algorithm with my posts, and rack up likes. The likes will give that instant dopamine, but they will never fill the need of accomplishing something.

    I like LLMs to get me to reword something, since I struggle with that. But just like in programming I focus it on a specific sentence or two. Otherwise why am I doing it?

    • Biganon 21 hours ago
      > What’s the point in generating writing or generating art if it gives next to zero feelings of accomplishment?

      That's how I feel about programming, and sometimes I feel like I'm taking crazy pills when I see so many of my colleagues using AI not only for their job, but even for their weekend programming projects. Don't they miss the feeling of... programming? Am I the weird one here?

      And when I ask them about it, they answer something like "oh but programming is the boring part, now I can focus on the problem solving" or something like that, even though that's precisely what they delegate to the AI.

      Weird times.

      • noduerme 15 hours ago
        You're not crazy at all. I engineer pretty big full-stack systems for a living, as a lone coder. I relish when I actually sit down and write the code. To turn a customer concept into animated UI functionality. To write a cron task that auto-generates a weekly prize contest. To hand-craft SQL and add a new feature that lets people see 10 years of data in a new way, on an old codebase.

        I've let Claude run around my code and asked it for help, etc. Once in a while it's able to diagnose some weird issues - like last month, it actually helped me figure out why PixiJS was creating undefined behavior after textures were destroyed on the GPU, in a very specific case. But the truth is, I wouldn't hire an intern or an employee to write my code, because they wouldn't be able to execute exactly what I have in mind.

        Ironically, in my line of work, I spend 5x as many hours thinking about what to build and how to build it as I do coding it. The fun part is coding it. And, that's the only time I charge for. I may spend 10 hours thinking about how to do something, drawing diagrams, making phone calls to managers and CEOs, and I won't charge any of that time. When I'm ready to sit down and write the code:

        I go to a bar.

        I turn my phone off.

        I work for 6 hours, have 4 drinks, and bill $300 per hour.

        I don't suspect that the kind of coding I'm doing, which includes all the preparation and thought that went into it, and having considered all edge cases in advance, is going to be replaced by LLMs. Or by the children who use LLMs. They didn't have much of a purchase on taking my job before, anyway... but sadly the ones who are using this technology now have almost no hope of ever becoming proficient at their profession.

        • kaffekaka 11 hours ago
          What you describe sounds very pleasant and I am sure it leads to great results. I kind of envy you.

          However, these two things are different: the kind of work that feels fulfilling, meaningful and even beautiful, versus: delivering the needed/wanted product.

          A vibe coded solution that basically works, for a quarter of the cost, has advantages.

          • moron4hire 10 hours ago
            We can choose not to live in a throw-away society by first not treating our own work as throw-away.
            • nradov 4 hours ago
              The notion of intentionally making work harder and less productive as some sort of protest against society seems so bizarre and self defeating. No one else will even notice or care. There will be zero broader impact.
            • kaffekaka 8 hours ago
              I agree. But I also think that workflows like noduerme's might be driven as much by his own preferences as by the needs of the customer. I am sure it is a good process, but it is also something that feels good for the developer himself, so there will be a drive to use it for personal rather than business reasons.
            • throwaway98797 9 hours ago
              most things don’t endure

              greater chance something will if we take more swings

              • A4ET8a8uTh0_v2 8 hours ago
                I don't know about that. Too much is allowed to not endure. I don't want to push on that point too hard, because I get what you are saying: things that are worth something will persist. Still, it would be nice if we didn't have the ridiculous churn of stuff.. that does nothing but gather dust only to be thrown away.
              • rewgs 7 hours ago
                Fully disagree. First, I question the value of something merely enduring. But that aside, implicit in what you're saying here is that the "skill of the swing," so to speak, doesn't matter, and only the quantity of swings does. Baseball clearly disproves this.
            • rewgs 7 hours ago
              Yup. "The downfall of society begins with the individual."

              https://x.com/lillybilly299/status/1865133434839990601

        • spockz 15 hours ago
          I like how you work! To be fair though, all of that quality and the other work of thinking is probably already included in the €|$300/hour rate.
          • noduerme 12 hours ago
            The $300/hr rate is, to be honest, quite cheap when you don't charge for all the time that went into meetings and preparing to write the code. It's probably more like $60/hr if you included all that. However, I don't need to account for my whereabouts the rest of the time, and I can just show a log of the code in progress if I'm ever asked about the time I bill.

            Of course, when you actually sit down and begin to write something new, you begin thinking about modules and namespaces and consolidating functions and which things you can streamline, and so on... which is why it's fun. You may change your mind several times as you realize that all of this behavior should go into the parent class or something like that. [I have a special $150/hr rate I sometimes bill for "yak shaving" - clients appreciate it, actually.] But then it's just about painting something which you already have in your mind.

            I prefer to be paid for my painting, by the hour, rather than ever charging a project rate. I'm always concerned that my consulting will be mistaken for wasted time. I never want to be accused of wasting a client's time or overbilling; but they understand that when I sit down to write it, it will be done right the first and last time, and that it could not be done any faster or better than that.

            Coding is not making a thing that appears to work. It's craftsmanship. It's quite difficult to convince a client that something which appears to work as a demo is not yet suitable or ready for production. It may take 20 more hours before it's actually ready to fly. Managing their expectations on that score is a major part of the work as well.

      • svara 15 hours ago
        > And when I ask them about it, they answer something like "oh but programming is the boring part, now I can focus on the problem solving" or something like that, even though that's precisely what they delegate to the AI.

        This I think I can explain, because I'm one of these people.

        I'm not a programmer professionally for the most part, but have been programming for decades.

        AI coding allows me to build tools that solve real world problems for me much faster.

        At the same time, I can still take pride and find intellectual challenges in producing a high quality design and in implementing interesting ideas that improve things in the real world.

        As an example, I've been working on an app to rapidly create Anki flashcards from Kindle clippings.

        I simply wouldn't have done this over the limited holiday time if not for AI tools, and I do feel that the high-level decisions about how this should work were intellectually interesting.

        That said, I do feel for the people who really enjoyed the act of coding line by line. That's just not me.

        • omnicognate 15 hours ago
          > coding line by line

          This phrase betrays a profoundly different view of coding to that of most people I know who actively enjoy doing it. Even when it comes to the typing it's debatable whether I do that "line by line", but typing out the code is a very small part of the process. The majority of my programming work, even on small personal projects, is coming up with ideas and solving problems rather than writing lines of code. In my case, I prefer to do most of it away from the keyboard.

          If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically. It is not.

          • jason_oster 5 hours ago
            > If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically.

            So I take it you don't let coding agents write your boilerplate code? Do you instead spend any amount of time figuring out a nice way to reduce boilerplate so you have less to type? If that is the case, and as intellectually stimulating as that activity may be, it probably doesn't solve any business problems you have.

            If there is one piece of wisdom I could impart, it's that you can continue enjoying the same problem solving you are already doing and have the machine automate the monotonous part. The trick is that the machine doesn't absorb abstract ideas by osmosis. You must be a clear communicator capable of articulating complex ideas.

            Be the architect, let the construction workers do the building. (And don't get me started, I know some workers are just plain bad at their jobs. But bad workmanship is good enough for the buildings you work in, live in, and frequent in the real world. It's probably good enough for your programming projects.)

          • noduerme 15 hours ago
            It's not the typing, obviously, you're right. I think the parent is talking about it being an "intellectual exercise" to organize their thoughts about what they wanted to see as a result, whereas we who enjoy programming enjoy the exercise of breaking down thoughts into logical and algorithmic segments, such that no edge cases are left behind, and such that we think through the client's requirements much more thoroughly than they thought through them themselves. A physician might take joy in finding and fixing a human or animal malady. A roofer might take joy in replacing a roof tile, or a whole roof. But what job besides coding offers you the chance to read through the entire business structure of.. a lawyer, a doctor, a roofing company, a bakery.. and then decide how to turn their business into (a) a forward-facing, customer-friendly website, and (b) a lean data-gathering machine and (c) a software suite and hosting infrastructure and custom databases tailored to their exact needs, after you've gleaned those needs from reading all their financials and everything they've ever put out into the world?

            The joy of writing code is turning abstract ideas into solid, useful things. Whether you do most of it in your head or not, when you sit down to write you will find you know how you want to treat bills - is it an object under payroll or clients or employees or is it a separate system?

            LLMs suck at conceptualizing schema (and so do pseudocoders and vibe coders). Our job is turning business models into schemata and then coding the fuck out of them into something original, beautiful, and useful.

            Let them have their fun. They will tire of their plastic toy lawnmowers, and the tools they use won't replace actual thought. The sad thing is: They'll never learn how to think.

            • svara 9 hours ago
              > The sad thing is: They'll never learn how to think.

              Drawing a sense of superiority out of personal choices or preferences is a really unfortunate human trait; particularly so in this case since it prevents you from seeing developments around you with clarity.

              • globnomulous 5 hours ago
                I agree with the person you're answering. LLM-assisted coding is like reading a foreign language with a facing translation: most students who do this will make the mistake of thinking they've translated and understood the original text. They haven't. People are abysmal at maintaining an accurate mental accounting of attribution, authorship, and ownership.
          • svara 10 hours ago
            From the way you describe it, our process does not sound that different, except that this

            > If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically. It is not.

            ... is exactly how this often works for me.

            If you don't get any value out of this at all, and have worked with SOTA tools, we must simply be working in very different problem domains.

            That said I have used this workflow successfully in many different problem domains, from simple CRUD style apps to advanced data processing.

            Two recent examples to make it more concrete:

            1) Write a function with parameter deckName that uses AnkiConnect to return a list of dataclasses with fields (...) representing all cards in the deck.

            Here, it one-shots it perfectly and saves me a lot of time sifting through crufty, incomplete docs.
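A minimal sketch of what such a function looks like against AnkiConnect's JSON API (it listens on localhost and speaks version-6 JSON payloads; the `Card` fields below are assumptions for illustration, since real `cardsInfo` responses carry more data):

```python
import json
import urllib.request
from dataclasses import dataclass

ANKI_URL = "http://127.0.0.1:8765"  # AnkiConnect's default local endpoint


@dataclass
class Card:
    card_id: int
    question: str
    answer: str


def anki_request(action, **params):
    """Build the payload AnkiConnect expects (API version 6)."""
    return {"action": action, "version": 6, "params": params}


def cards_in_deck(deck_name):
    """Fetch every card in a deck as Card dataclasses (fields assumed)."""
    def call(payload):
        req = urllib.request.Request(
            ANKI_URL, json.dumps(payload).encode(),
            {"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["result"]

    ids = call(anki_request("findCards", query=f'deck:"{deck_name}"'))
    return [Card(c["cardId"], c["question"], c["answer"])
            for c in call(anki_request("cardsInfo", cards=ids))]
```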

            2) Implement a function that does resampling with trilinear interpolation on 3d instance segmentation. Input is a jnp array and resampling factor, output is another array. Write it in Jax. Ensure that no new instance IDs are created by resampling, i.e. the trilinear weights are used for weighted voting between instance IDs on each output voxel.

            This one I actually worked out on paper first, but it was my first time using Jax and I didn't know the API and many of the parallelization tricks yet. The LLM output was close, but too complex.

            I worked through it line by line to verify it, and ended up learning a lot about how to parallelize things like this on the GPU.

            At the end of the day it came out better than I could have done it myself because of all the tricks it has memorized and because I didn't have to waste time looking up trivial details, which causes a lot of friction for me with this type of coding.
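For reference, the weighted-voting scheme in (2) can be sketched in plain Python/NumPy; it's written as naive loops for clarity, and a Jax version would vectorize the same gather-and-vote logic:

```python
import numpy as np


def resample_instances(vol, factor):
    """Resample a 3-D instance-ID volume by `factor` with trilinear
    *weighted voting*: each output voxel takes the ID with the largest
    summed trilinear weight among its 8 input neighbours, so no blended
    (new) IDs can ever appear. Looped for clarity, not speed."""
    D, H, W = vol.shape
    out_shape = (int(D * factor), int(H * factor), int(W * factor))
    out = np.zeros(out_shape, dtype=vol.dtype)
    for oz in range(out_shape[0]):
        for oy in range(out_shape[1]):
            for ox in range(out_shape[2]):
                # map the output voxel centre back into input index space
                z = min(max((oz + 0.5) / factor - 0.5, 0.0), D - 1)
                y = min(max((oy + 0.5) / factor - 0.5, 0.0), H - 1)
                x = min(max((ox + 0.5) / factor - 0.5, 0.0), W - 1)
                z0, y0, x0 = int(z), int(y), int(x)
                z1 = min(z0 + 1, D - 1)
                y1 = min(y0 + 1, H - 1)
                x1 = min(x0 + 1, W - 1)
                fz, fy, fx = z - z0, y - y0, x - x0
                votes = {}  # instance ID -> accumulated trilinear weight
                for cz, wz in ((z0, 1 - fz), (z1, fz)):
                    for cy, wy in ((y0, 1 - fy), (y1, fy)):
                        for cx, wx in ((x0, 1 - fx), (x1, fx)):
                            vid = vol[cz, cy, cx]
                            votes[vid] = votes.get(vid, 0.0) + wz * wy * wx
                out[oz, oy, ox] = max(votes, key=votes.get)
    return out
```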

        • noduerme 15 hours ago
          >> AI coding allows me to build tools that solve real world problems for me much faster.

          If you can't / won't / don't read and write the code yourself, can I ask how you know that the code written for you is working correctly?

          • svara 15 hours ago
            I do read it. In my experience the project will quickly turn into crap if you don't. You do need to steer it at a level of granularity that's appropriate for the problem.

            Also, as I said, I've been coding for a long time. The ability to read the code relatively quickly is important, and this won't work for early novices.

            The time saving comes almost entirely from having to type less, having to Google around for documentation or examples less, and not having to do long debugging sessions to find brainfart-type errors.

            I could imagine that there's a subset of ultra experienced coders who have basically memorized nearly all relevant docs and who don't brainfart anymore... For them this would indeed be useless.

            • noduerme 14 hours ago
              I mean, I'm curious what kind of code it's saving you time on. For me, it's worse than useless, because no prompt I could write would really account for the downwind effects in systems that have (1) multiple databases with custom schema, (2) a back-end layer doing user validations while dispatching data, (3) front-end visual effects / art / animation that the LLM can't see or interpret, all working in harmony. Those may be in 4 different languages, but the LLM really just can't get a handle on what's going on well enough. Just ends up hitting its head on a wall or writing mostly garbage.

              I have not memorized all the docs to JS, TS, PHP, Python, SCSS, C++, and flavors of SQL. I have an intuition about what question I need to ask, if I can't figure something out on my own, and occasionally an LLM will surface the answer to that faster than I can find it elsewhere... but they are nowhere near being able to write code that you could confidently deploy in a professional environment.

              • maccard 12 hours ago
                I'm far more in the not-AI camp than pro-LLM, but I gave Claude the HTML of our Jira ticket and told it we had a Jenkins pipeline that we wanted to update specific fields on the ticket, using Python. Claude correctly figured out how we were calling Python scripts from Jenkins, grabbed a library, and one-shotted the solution in about 45 seconds. I then asked it to add a post pipeline to do something else, which it did, and managed to get it perfectly right.

                It was probably 2-3 hours work of screwing around figuring out issue fields, python libraries, etc that was low priority for my team but causing issues on another team who were struggling with some missing information. We never would have actually tasked this out, written a ticket for it, and prioritised it in normal development, but this way it just got done.

                I’ve had this experience about 20 times this year for various “little” things that are attention sinks but not hard work - that’s actually quite valuable to us
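A sketch of what that kind of glue script looks like against Jira's REST edit-issue endpoint; the custom field IDs here are hypothetical (real ones come from `/rest/api/2/field`), and auth details vary by installation:

```python
import json
import urllib.request


def build_update(build_url, status):
    """Payload for Jira's edit-issue endpoint. The customfield IDs are
    hypothetical placeholders; a real script would look them up once
    via /rest/api/2/field."""
    return {"fields": {"customfield_10100": build_url,
                       "customfield_10101": status}}


def update_issue(base_url, issue_key, payload, token):
    """PUT the field update to Jira; returns 204 No Content on success."""
    req = urllib.request.Request(
        f"{base_url}/rest/api/2/issue/{issue_key}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="PUT")
    urllib.request.urlopen(req)
```

In a Jenkins pipeline this would be invoked as a post-build step with the build URL and result passed in as arguments.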

                • shsusha 11 hours ago
                  > It was probably 2-3 hours work of screwing around figuring out issue fields

                  How do you know AI did the right thing then? Why would this take you 2-3 hours? If you’re using AI to speed up your understanding that makes sense - I do that all the time and find it enormously useful.

                  But it sounds like you’re letting AI do the thinking and just checking the final result. This is fine for throwaway work, but if you have to put your name behind it that’s pretty risky, since you don’t actually understand why AI did what it did.

                  • maccard 9 hours ago
                    > How do you know AI did the right thing then?

                    Because I tested it, and I read the code. It was only like 40 lines of python.

                    > Why would this take you 2-3 hours?

                    It's multiple systems that I am a _user_ of, not a professional developer of. I know how to use Jira, I'm not able to offhand tell you how to update specific fields using python - and then repeat for Jenkins, perforce, slack. Getting credentials in (Claude saw how the credentials were being read in other scripts and mirrored that) is another thing.

                    > This is fine for throwaway work, but if you have to put your name behind it that’s pretty risky, since you don’t actually understand why AI did what it did.

                    As I said above, it's 30 lines of code. I did put my name behind it; it's been running on our codebase on every single check-in for 6 months and has failed 0 times in that time (we have a separate report that we check in a weekly meeting for issues missed by this process). Again, this isn't some massive complicated system - it's just gluing together 3-4 APIs in a tiny script, in 1/10 of the time it would have taken me. Worst case scenario, it does exactly what it did before - nothing.

                  • noduerme 10 hours ago
                    Hah, even the concept of putting your name behind something is so great. It's kind of the ultimate protest against LLMs and social media, isn't it?
                • noduerme 11 hours ago
                  I've used it for minor shit like that, but then I go back and look at the code it wrote with all its stupid meandering comments and I realize half the code is like this:

                  const somecolor = '#ff2222'; /* oh wait, the user asked for it to be yellow. Let's change the code below to increase the green and red */

                  /* hold on, I made somecolor a const. I should either rewrite it as a var or wait, even better maybe a scoped variable! */

                  hah. Sorry, I'm just making this shit up, but okay. I don't hire coders because I just write it myself. If I did, I would assign them all kinds of annoying small projects. But how the fuck would I deal with it if they were this bad?

                  If it did save me time, would I want that going into my codebase?

                  • maccard 9 hours ago
                    I've not found it to be bad for smaller things, but I've found once you start iterating on it quickly devolves into absolute nonsense like what you talked about.

                    > If it did save me time, would I want that going into my codebase?

                    Depends - and that's the judgement call. I've managed outsourcers in the pre-LLM days who if you leave them unattended will spew out unimaginable amounts of pure and utter garbage that is just as bad as looping an AI agent with "that's great, please make it more verbose and add more design patterns". I don't use it for anything that I don't want to, but for so many things that just require you to write some code that is just getting in the way of solving the problem you want to solve it's been a boon for me.

              • svara 10 hours ago
                I've also not had great experiences with giving it tasks that involve understanding how multiple pieces of a medium-large existing code base work together.

                If that's most of what you do, I can see how you'd not be that impressed.

                I'd say though that even in such an environment, you'll probably still be able to extract tasks that are relatively self contained, to use the LLM as a search engine ("where is the code that does X") or to have it assist with writing tests and docs.

                • jason_oster 5 hours ago
                  Your conclusion is spot on. Fuzz generators excel at fuzzy tasks.

                  "Convert the comments in this DOCX file into a markdown table" was an example task that came up with a colleague of mine yesterday. And with that table as a baseline, they wrote a tool to automate the task. It's a perfect example of a tool that isn't fun to write and it isn't a fun problem to solve, but it has an important business function (in the domain of contract negotiation).

                  I am under the impression that the people you are arguing with see themselves as artisans who meticulously control every bit of minutiae for the good of the business. When a manager does that, it's pessimistically called micromanagement. But when a programmer does that, it's craftsmanship worthy of great praise.

          • mlrtime 13 hours ago
            Because it does what I want it to do?

            Not sure how this is so hard to understand. If you have closed-source software, how do you know it's working?

          • fragmede 15 hours ago
            Same way you test code you wrote by hand: in place and haphazardly, until you have it write unit tests so you can have it done more methodically. If it hallucinates a library or function that doesn't exist, it'll fail earlier in the process (at compilation).
            • noduerme 15 hours ago
              I've used Claude to write code, and it is much harder to test that code than it is to test code "haphazardly" as I write it myself. Reason being, I can test mine after each new line I write and make sure that line is doing what I intend it to do. After Claude writes a whole set of functions, it could take hours to test all the potential failure modes.

              BTW, if it doesn't take you hours to test the failure modes, you're not thinking of enough failure modes.

              The time savings in writing it myself has a lot to do with this. Plus I get to understand exactly why each line was written, with comments I wrote, not having to read its comments and determine why it did something and whether changing that will have other ramifications.

              If you're doing anything larger than a sample React site, it's worth taking the time to do it yourself.

              • tokioyoyo 10 hours ago
                Well, you could also generate the tests by CC, check them to make sure they’re legitimate, then let it implement it?

                The main key in steering Claude this month (YMMV) is basically giving it tasks that are localized, can be tested, and are not too general. Then you kinda connect the dots in your head. Not always, but you can kinda get the gist of what works and what doesn't.

        • ErroneousBosh 13 hours ago
          > AI coding allows me to build tools that solve real world problems for me much faster.

          But it can't actually generate working code.

          I gave it a go over the Christmas holidays, using Copilot to try to write a simple program, and after four very frustrating hours I had six lines of code that didn't work.

          The problem was very very simple - write a bit of code to listen for MIDI messages and convert sysex data to control changes, and it simply couldn't even get started.
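For what it's worth, the core of that translation is small once a sysex payload layout is fixed. The layout below (one manufacturer-ID byte, then parameter/value pairs) is purely an assumption; a real device documents its own format, and for live I/O you'd wrap each pair in e.g. mido's `Message('control_change', ...)` and send it on an output port:

```python
# MIDI "non-commercial / educational" manufacturer ID
NONCOMMERCIAL_ID = 0x7D


def sysex_to_cc(data):
    """Map a sysex payload (the bytes between the F0/F7 framing, which is
    assumed already stripped) to a list of (controller, value) pairs.

    Assumed layout: one manufacturer-ID byte, then alternating
    parameter/value bytes. Payloads from other manufacturers are ignored,
    and a trailing unpaired byte is dropped rather than guessed at."""
    if not data or data[0] != NONCOMMERCIAL_ID:
        return []
    body = list(data[1:])
    return list(zip(body[0::2], body[1::2]))
```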

          • Rodeoclash 13 hours ago
            I'm sure someone is about to jump in and tell you why you're doing it wrong, but I'm in a similar position to you. I spent the last few days using the AI to help me pull together evidence for our ISO audit and while it didn't do a bad job, it was rife with basic errors. Simple things like consistently formatting a markdown document would work 9/10 times with the other time having it ignore the formatting, or deciding to rewrite other bits of the document for no reason.
          • tokioyoyo 10 hours ago
            Yeah, unfortunately the quality of tooling varies heavily, ranging from producing garbage to producing working code. Claude Code got significantly better in the last couple of months, and it's been noticeable. I've been trying to plug LLMs into my workflow throughout the year, to make sure I don't fall behind the industry. And this last month was when it "clicked". It works in large and small projects as long as you kinda know how to localize the tasks.
          • shlant 12 hours ago
            I know "try this other tool" is probably an eye-roll-worthy response, but as someone who's not a programmer, works in IT, has to write scripts every once in a while, and has a lot of AI-heavy dev friends - all I've ever heard about Copilot is that it's one of the worst.

            I recently used Claude for a personal project and it was a fairly smooth process. Everyone I know who does a lot of programming with AI uses Claude mostly.

      • dota_fanatic 20 hours ago
        > "oh but programming is the boring part, now I can focus on the problem solving" or something like that, even though that's precisely what they delegate to the AI.

        Take game programming: it takes an immense amount of work to produce a game, problems at multiple levels of abstraction. Programming is only one aspect of it.

        Even web apps are much, much more than the code backing them. UI/UX runs deep.

        I'm having trouble understanding why you think programming is the entirety of the problem space when it comes to software. I largely agree with your colleagues; the fun part for me, at this point in my career, is the architecture, the interface, the thing that is getting solved for. It's nice for once to have line of sight on designs and be able to delegate that work instead of writing variations on functions I've written thousands if not tens of thousands of times. Often for projects that are fundamentally flawed or low impact in the grand scheme of things.

        • hackable_sand 16 hours ago
          I sense a serious disconnect. I don't go to the dojo once.
        • zer00eyz 20 hours ago
          If this were another industry...

          I don't know why people build houses with nail guns. I like my hammer... What's the point of building a house if you're not going to pound the nails in yourself?

          AI tooling is great at getting all the boilerplate and bootstrapping out of the way... One still has to have a thoughtful design for a solution, to leave gaps where you see things evolving, rather than writing something so concrete that you're scrapping it to add new features.

          • andy99 14 hours ago
            This appears to misunderstand both construction and software development, nail guns and LLMs are not remotely parallel.

            You’re comparing a deterministic method of quickly installing a fastener with something that nondeterministically designs and builds the whole building.

          • Shog9 19 hours ago
            Nail guns are great. For nails that fit into them and spaces they fit into. But if you can't hit a nail with a hammer, you're limited to the sort of tasks that can be accomplished with the nail guns and gun-nails you have with you.

            This is the way with many labor-saving devices.

            • nosianu 15 hours ago
              That's the problem with picking apart a casually made metaphor instead of sticking to the original question. Since when does AI-assisted coding mean 100% AI and not a single line written yourself? That is only the extreme end! Same with the nails, actually. I doubt the builders don't also have and use hammers.

              > This is the way with many labor-saving devices.

              I think that's more the problem of people using only the extremes to build an argument.

          • zwnow 17 hours ago
            You can pick apart a nail gun and see exactly how it works pretty easily. You can't do that with LLMs. A nail gun also doesn't get less accurate the more nails you shoot one after another; an LLM does get less accurate the more steps it goes through. And a nail gun shoots straight, not in random directions, as that would be considered dangerous; an LLM does shoot in random directions, and the same prompt will often yield different results. With a nail gun you can pull the plug without spending an unreasonable amount of time verifying that the nail was placed correctly; with LLM output you have to verify everything, which takes a lot of time. If an LLM really is such a great tool for you, I fear you are not verifying everything it does.

            If the boilerplate is that obvious why not just have a blueprint for that and copy and paste it over using a parrot?

            Also, I don't have a nail gun subscription, and the nail gun vendor doesn't get to see what I am doing with it.

            • stavros 16 hours ago
              You mention a thousand ways the analogy breaks when you take it too far, but you didn't address the actual (correct) point the analogy was making: Some people don't enjoy certain parts of the creative process, and let an LLM handle them. That's all.
              • exceptione 15 hours ago

                  > Some people don't enjoy certain parts of the creative process, 
                Sure

                  >  and let an LLM handle them. 
                This is probably the disputed part. It is not just a different way of developing, and as such it should not be presented like that. In software, we can use ready-made components, choose between different strategies, build everything in a low-level language, etc. The trade-offs that come with each choice are in principle knowable; the developer is still in control.

                LLMs are nothing like that. Using an LLM is more akin to managing outsourced software development. On the surface, it might look like you get ready-made components by outsourcing to it, but there is no contract guaranteeing any standard, so you have to check everything.

                Now if people would present it like "I'd rather manage an outsourcing process than do the creative thing," we would have no discussion. But hammers and nails aren't the right analogies.

                • mlrtime 13 hours ago
                  >LLMs are nothing like that. Using an LLM is more akin to managing outsourced software development.

                  You're going to have to tell us your definition of "using an LLM", because it is not akin to outsourcing (as I use it).

                  When I use Claude, I tell it the architecture, the libraries, the data flows, everything. It just puts the code down, which is the boring part, and that happens fast.

                  The time is spent mostly on testing and finding edge cases. The exact same thing as if I wrote it all myself.

                  I don't see how this is hard for people to grasp?

                  • exceptione 12 hours ago

                      > 'using an LLM', because it is not akin to outsourcing (as I use it).
                    
                    The things you do with an LLM are precisely what many IT firms do when outsourcing to India. Now you might say that this would be bonkers, but that is also why you hear so often that LLMs are the biggest threat to outsourcing rather than to software development in general. The feedback cycle with an LLM is much faster.

                      > I don't see how this is hard for people to grasp?
                    
                    I think I understand you, and I think you have/had something else in mind when hearing the term outsourcing.
                • stavros 15 hours ago
                  I don't think people use an LLM and say "I wrote some code", but they do say "I made a thing", which is true. Even if I use an LLM to make a library, and I decide the interfaces, abstractions, and algorithms, it was still me who did all that.
                • jason_oster 4 hours ago
                  > Using a LLM is more akin to management of outsource software development.

                  This is a straw man. You have described one possible way to use an LLM and presented it as the only way. Even people who use LLMs will agree that this weak version of the argument is easy to cut down.

              • zwnow 15 hours ago
                An analogy doesn't work if it has thousands of flaws.
                • stavros 15 hours ago
                  You can't stretch it until it breaks and then say "see? It broke, it wasn't perfect". It works for the purpose it was made, and that's all it needed to work for.
                  • zwnow 6 hours ago
                    So the analogy is okay if it supports your argument but a counter analogy isn't okay if it doesn't support your argument, got it.
                    • stavros 6 hours ago
                      This 4D reading comprehension chess is too much for me, sorry.
        • thefaux 20 hours ago
          Sure, but I prefer to work on projects that are fundamentally sound and high impact. Indeed, I have certainly noticed a pattern that very often ai enthusiasts exalt its capabilities to automate work that appears to be of questionable value in the first place, apart from the important second order property of keeping the developer sheltered and fed.
          • mlrtime 13 hours ago
            Can you tell us these patterns (should be easy) that have questionable value, yet are paid well enough for rent/food?
      • hansvm 19 hours ago
        Programming is a ton of fun. There are competing concerns though.

        I recently wrote a 17x3 Reed-Solomon encoder which is substantially faster on my 10-year-old laptop than the latest and greatest solution from Backblaze on their fancy schmancy servers. The fun parts for me were:

        1. Finally learning how RS works

        2. Diving in sufficiently far to figure out how to apply tricks like the AVX2 16-element LUT instruction

        3. Having a working, provably better solution

        The programming between (2) and (3) was ... fine ... but I have literally hundreds of other projects I've never shipped because the problem solving process is more enjoyable and/or more rewarding. If AI were good enough yet to write that code for me then I absolutely would have used it to have more time to focus on the fun bits.

        It's not that I don't enjoy coding -- some of those other unshipped projects are compilers, tensor frameworks, and other things which exist purely for the benefit of programmer ergonomics. It's just that coding isn't the _only_ thing I enjoy, and it often takes a back seat.

        I most often see people with (what I can read into) your perspective when they "think" by programming. They need to be able to probe the existing structure and inject their ideas into the solution space to come up with something satisfactory.

        There's absolutely nothing wrong with that (apologies if I'm assuming too much about the way you work), but some people work differently.

        I personally tend to prefer working through the hard problems in a notebook. By the time the problem is solved, its ideal form in code is obvious. An LLM capable of turning that obvious description into working code is a game changer (it still only works like 30% of the time, and even then only with a lot of heavy lifting from prompt/context/agent structure, so it's not quite a game changer yet, but it has potential).
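        For context on the LUT trick: the core operation being accelerated is a GF(256) multiply. A scalar Python sketch using plain log/antilog tables (the vpshufb kernel instead splits each byte into two nibbles and looks up a pair of 16-entry tables); this assumes the common 0x11D reduction polynomial, not necessarily the one used here:

```python
# GF(256) multiply via log/antilog tables — the scalar version of
# what SIMD LUT kernels accelerate.
EXP = [0] * 512   # antilog table, doubled so gf_mul needs no modulo
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:        # reduce modulo x^8 + x^4 + x^3 + x^2 + 1
        x ^= 0x11D
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a: int, b: int) -> int:
    """Multiply two GF(256) elements: add logs, take antilog."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]
```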

        • foota 17 hours ago
          If you're curious, you might also be interested in Cauchy-Reed Solomon coding. This converts Galois field operations into XORs by treating elements of GF(2^n) as bit matrices. The advantage then is that instead of doing Galois field operations, you can just xor things for much better performance. The canonical paper is https://web.eecs.utk.edu/~jplank/plank/papers/CS-05-569.pdf.

          https://www.usenix.org/system/files/fast19-zhou.pdf is a more modern paper that goes into some related problems of trying to reduce the number of XOR operations needed to encode data.
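          A toy Python sketch of that conversion (using GF(16) and the x^4 + x + 1 polynomial purely for illustration): column j of an element's bit matrix is that element times 2^j, so multiplying by it reduces to XOR-ing together the columns selected by the operand's set bits.

```python
W = 4            # GF(2^4) for illustration; real coders use GF(2^8)
POLY = 0x13      # x^4 + x + 1, chosen arbitrarily for this sketch

def gf_mul_poly(a: int, b: int) -> int:
    """Reference multiply: carry-less product reduced mod POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << W):
            a ^= POLY
    return r

def bit_matrix(a: int) -> list:
    """Column j of a's bit matrix is the field element a * 2^j."""
    return [gf_mul_poly(a, 1 << j) for j in range(W)]

def mul_via_xor(a: int, b: int) -> int:
    """Multiply by a using only XORs: combine the columns of a's
    bit matrix that correspond to set bits of b."""
    r = 0
    for j, col in enumerate(bit_matrix(a)):
        if (b >> j) & 1:
            r ^= col
    return r
```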

          • hansvm 7 hours ago
            That was a fun little rabbit-hole with a nice 20x (not backward-compatible with my existing memory layout) speedup. Thank you!
      • driverdan 9 hours ago
        > I see so many of my colleagues using AI not only for their job, but even for their week-end programming projects

        When writing code in exchange for money the goal is not to write code, it's to solve a problem. Care about the code if you want but care about solving the problem quickly and effectively more. If LLMs help with that you should be using them.

        On personal projects it depends on your goal. I usually want the tool more than whatever I get from writing code. I always read whatever an LLM spits out to make sure I understand it and confirm it's correct but why wouldn't I accelerate my personal tool development as well?

      • stared 12 hours ago
        Let me explain my perspective, as I do vibe coding for some side projects. AI (even when it works correctly) is a thing that fills in the blanks. Still, the value depends on how much work you put in.

        One recent project is an interactive StarCraft 2 visualization (https://github.com/stared/sc2-balance-timeline). Here I could have done it myself (and spent way more time than I want to admit on refactoring, so the code looks OK-ish), but it's unlikely I would have had enough time to do so. I had the idea a few years ago, but it was just too much work for a side project. Now I did it - my focus was high-level, on WHAT I want to do, with constant feedback on how it looks, tweaking it a lot.

        Another recent one is a "project for one", a Doom WAD launcher (https://github.com/stared/rusted-doom-launcher). Here I wouldn't have been able to do it myself, as I am not nearly as proficient in Rust, Tauri, WADs, etc. But I wanted a tool that makes launching custom Doom maps as easy as installing a game on Steam.

        In both cases the pattern is the same - I care more about the result itself than its inner workings (OK, for the viz I DO care). Yes, it takes away a lot of the experience of coding oneself. But it is not something entirely different - people have asked the same before: "why use a framework instead of writing it yourself", "why use Python when you could have used C++", "why visit StackOverflow when you could have spent 2 days finding the solution yourself".

        With side projects it is OUR choice what we value. For someone it is writing low-level machine code by hand, even if it won't be that useful. For someone else, making a cute visual. For yet another, having an MVP that "just works" to test a business idea.

        • Dumblydorr 11 hours ago
          That's a great SC2 balance tool, kudos for that! I've been out of touch with the scene; have the recent balance changes been good for the pro scene? I only watch ASL these days, no time for GSL.
          • stared 11 hours ago
            Thanks!

            Yes, the balance updates keep the game alive.

            For watching current games, I cannot recommend anyone better than Lowko (https://www.youtube.com/@LowkoTV) - he covers the main matches and provides commentary in a style I like.

      • Jach 19 hours ago
        I enjoy the programming, and the problem solving, but only sometimes the typing. Advent of Code last month was fun to do in Common Lisp, I typed everything but two functions myself, and only consulted with the subreddit and/or the AI on a couple problems. (Those two functions were for my own idea of using A-star over Morton Numbers, I wrote about those numbers with some python code in 2011 and didn't feel like writing the conversion functions again. It didn't work out anyway, I had to get the hint of "linear programming" and a pointer to GLPK, which I hadn't used before, so I had the AI teach me how to use it for standard sorts of LP/MIP problems, and then I wrote my own Lisp code to create .lp files corresponding to the Advent problem and had GLPK execute and give the answers.)
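        A minimal sketch of the Morton-number bit interleaving mentioned above (Z-order), assuming 16-bit coordinates:

```python
def morton_encode(x: int, y: int) -> int:
    """Interleave the bits of x and y into one Z-order (Morton) number."""
    z = 0
    for i in range(16):          # 16-bit coordinates assumed
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def morton_decode(z: int):
    """Split a Morton number back into its two coordinates."""
    x = y = 0
    for i in range(16):
        x |= ((z >> (2 * i)) & 1) << i
        y |= ((z >> (2 * i + 1)) & 1) << i
    return x, y
```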

        If it's a language I don't particularly enjoy, though, so much the better that the AI types more of it than me. Today I decided to fix a dumb YouTube behavior that has been bugging me for a while; I figured it would be a simple matter of making a Greasemonkey script that does a fetch() request formed from dynamic page data, grabs some text out of the response, and replaces some other text with it. After validating the fetch() part in the console, I told ChatGPT to code it up and also make sure to cache the results. Out comes a nice little 80 lines or so of JS, similar to how I would have written it, setting up the MutationObserver and handling the cache map and a promises map. It works except in one case where it just needs to wait longer before setting things up, so I have it write that setTimeout loop part too, another several lines, and now it's all working. I still feel a little bit of accomplishment because my problem has been solved (until YouTube breaks things again, anyway), the core code flow idea I had in mind worked (no need for API shenanigans), and I didn't have to type much JavaScript. It's almost like using a much higher-level language. Life is too short to write much code in x86 assembly, or JavaScript for that matter, and I've already written enough of the latter that I feel like I'm good.

      • gcanyon 10 hours ago
        Each level of programming abstraction has taken us a step further from the bare metal, as they used to say. In machine code you specify what goes where at the memory/register level. Assembly gives you human-readable mnemonics. C abstracts away direct control, but you still manually allocate memory. Java and C# abstract away memory management, but you still declare types. Python and JavaScript abstract away type declarations, but you still define variables and program structure. With AI, you define your end goal in plain language, and then you either have to understand the code to edit it, or literally depend on the machine to fix everything.

        In a sense it's like SQL or MiniZinc: you define the goal, and the engine takes care of how to achieve it.

        Or maybe it's like driving: we don't worry about spark advance, or often manual clutches, anymore, but LLMs are like Waymo where your hands aren't even on the steering wheel and all you do is specify the destination, not even the route to get there.

        • blibble 9 hours ago
          "AI" is not an abstraction like a compiler or a library

          it's outsourcing to an unreliable body shop where they barely speak English and the weekly attrition rate is 300%

      • scrollop 18 hours ago
        Devil's advocate -

        "I love complicated mathematical questions, and love doing the basic multiplication and division calculations myself without a calculator. I don't understand why people would use a calculator for this."

        "I love programming, and don't understand why people would use C++ instead of machine language. You get deep down close to the hardware, such a good feeling; people are missing out. Even assembly language is too much of a cheat."

        On the other hand - people still knit, I assume for the enjoyment of it.

      • bpye 16 hours ago
        > That's how I feel with programming, and sometimes I feel like I'm taking crazy pills when I see so many of my colleagues using AI not only for their job, but even for their week-end programming projects. Don't they miss the feeling of..... programming? Am I the weird one here?

        I've played with using LLMs for code generation in my own projects, and whilst it has sometimes been able to solve an issue - I've never felt like I've learned anything from it. I'm very reluctant to use them for programming more as I wouldn't want my own skills to stagnate.

      • hamandcheese 20 hours ago
        I enjoy programming, sure. But I also enjoy the act of creation, which as others here point out, is usually much more than just programming.
      • markus_zhang 17 hours ago
        I do use ChatGPT for side projects, but only as a last resort and always as a discussion partner, not a code writer. I always tell it beforehand "no code, just discussion". The fun is in figuring out as much stuff as possible by myself and writing the implementations, and I'm not paying someone to take my fun.

        But again my projects are more research than product, so maybe it’s different.

        • codr7 15 hours ago
          What good is a discussion partner that only has one goal, to keep the discussion going?

          I suspect you've found a new hobby, not improved the existing one.

          • markus_zhang 14 hours ago
            ChatGPT helps me to understand some topics. It’s the same hobby but with ChatGPT replacing stackoverflow. So far it’s surprisingly better.
        • lorenzo1860 13 hours ago
          really agree! That is really beneficial for building intuition and getting some inspiration
      • michaelcampbell 12 hours ago
        > Don't they miss the feeling of..... programming? Am I the weird one here?

        Our company is "encouraging" use of LLMs through various carrots and sticks; mostly sticks. They put out a survey recently asking us how we used it, how it's helped, etc. I'll probably get fired for this (I'm already on the short list for RIFs due to being remote in a pathological RTO environment and being easily the eldest developer here, but...), but I wrote something like:

        "Most of us coders, especially older ones, are coders because we like coding. The amount of time and money being spent to make coders NOT CODE is incredible."

      • travisgriggs 13 hours ago
        It depends.

        I like programming. Quite a bit. But the modern bureaucratic morass of web technologies is usually only inspiring in the small. I do not like the fact that I have to balance so many different languages and paradigms to get to my end result.

        It would be a bit like a playwright aficionado saying “I really love telling stories through stage play” only to discover that all verbs used in dialogue had to be in Japanese, nouns are a mix of Portuguese and German, and connecting words in English. And talking to others to put your play on, all had to be communicated in Faroese and Quechua.

      • andy99 14 hours ago
        I’m convinced that some people are overly susceptible to the “preference optimized” nature of AI output and end up completely blind to its quality and usefulness.

        Not to say it’s useless garbage, there is some value for sure, but it’s nowhere near as good as some people represent it to be. It’s not an original observation, but people end up in a “folie a deux” with a chatbot and churn out a bunch of mediocre stuff while imagining they’re breaking new ground and doing some amazing thing.

      • js8 20 hours ago
        > Am I the weird one here?

        Yes. I think it depends on one's goals.

        You can ask, in the same vein, why use Python instead of C? Isn't the real joy of programming in writing effective code with manual memory management and pointers? Isn't the real joy in exploring 10 different libraries for JSON parsing? Or in learning how to write a makefile? Or figuring out a mysterious failure of your algorithm due to an integer overflow?

        TBH I am not sure AI is better either (see https://youtube.com/shorts/QZCHax14ImA), but it's probably gonna get figured out.

      • throwaway98797 9 hours ago
        solving users' problems vs solving technical problems is likely where your confusion lies
      • ErroneousBosh 13 hours ago
        > Don't they miss the feeling of..... programming?

        I feel kind of the same when I read about people wanting self-driving cars. What's the advantage of them? Why would it be helpful?

      • wickedsight 16 hours ago
        ADHD is a thing for some people though. What works for one person might not work for other people. I have a friend who spends probably 8-10 hours a day writing code, every single day. I just can't do this personally, therefore, my ideas/projects never actually go anywhere.

        AI tools allow me to do a lot of stuff within a short time, which is really motivating. They also automatically keep a log of what I was doing, so if I don't manage to work on something for weeks, I can quite easily get back in and read my previous thinking.

        It can also get very demotivating to read 10 StackOverflow discussions from a Google search that don't solve my problem. This can knock me out of 'the zone' and make it extremely hard to continue. With AI tools, I can rephrase my question if the answer isn't exactly what I was looking for and steer towards a working solution. I can even easily get in-depth explanations of provided solutions to figure out why something doesn't work.

        I also have random questions pop up in my brain throughout the day. These distract me from the task at hand. I can now pop the question into an AI tool and have it research the answer, instead of being distracted for an hour reading up on brake pads or cake recipes or the influence of nicotine on driving ability.

      • smaudet 20 hours ago
        This is why I have yet to use AI, and will probably never.

        It's either taking away the most important (or rewarding) thing I need to do (think) and just causing me more work, or it has replaced me.

        AI. Is. Not. Useful.

        • visarga 19 hours ago
          Everyone is so fixated on the output as the commodity, whether it’s a blog post or a piece of code, that they fail to see the interaction itself as the locus of value. You can still do your rewarding work in a chat session, it can force you to think, challenge your ideas, and if you introduce your own spices into the soup it won't taste like slop. I like to explain my ideas until the LLM "gets it" and then ask it to "formalize" them in a nice piece of text, which I consume later as a meditation to deepen my thinking. I can't stand passive media anymore, need to be able to push back to feel satisfied, but this is only possible on forums and in AI chats.
        • dangus 19 hours ago
          If you have yet to use it then you have no idea if it’s useful.

          We can agree all day long about the pitfalls of the technology, but you’ve never used it so you don’t know if it’s causing you more work or replacing you.

        • whattheheckheck 19 hours ago
          If you have never used it, how can you say it's not useful?
        • monerozcash 17 hours ago
          >This is why I have yet to use AI, and will probably never.

          >AI. Is. Not. Useful.

          Why waste time writing things like this? What's the point?

          • stavros 16 hours ago
            It's to give others a sense of how many people can hold a completely indefensible opinion based purely on feels.
        • rubidium 20 hours ago
          This is the attitude of someone who uses hand tools when power tools are available. Yes you loose the personal touch but also loose the potential efficiency. Still need to measure twice and cut once though.
          • wtetzner 17 hours ago
            I don't think this analogy really works. A power tool would be a programming language with a better set of abstractions, or a good library that solves a hard problem.

            AI is like delegating to a junior programmer that never learns or gets better.

          • js8 20 hours ago
            I don't like your analogy because there are good reasons for amateurs not to use the power tools (for real-world crafting). They are expensive and you can hurt yourself easier. This is very unlike using AI to help you build something faster.
            • dangus 19 hours ago
              The analogy still works and we get the idea.

              Maybe a better analogy might be a car with an automatic transmission, although that doesn’t capture the pitfalls of AI very well. It could be argued that a good automatic transmission has none of the serious downsides that AI has.

              Still, the general idea is that sometimes getting stuff done faster, with less effort, is more important than the "reward" of doing it yourself.

              • fragmede 17 hours ago
                If we're bringing up cars, the parallel to draw is with GPS and navigation. Do I know how to get anywhere without technology to guide me? Have I broken my brain because I've offloaded navigation to technology?
                • yurishimo 17 hours ago
                  Eh, it depends on a lot of other factors. Perhaps only in the most extreme case, the first time you take a new route.

                  I use a GPS all the time, but only because it also shows me traffic, red light cameras, and potential hazards. I memorized the route after the first 2-3 drives, but I keep using the GPS for the amenities.

                  That said, I’m old enough to have used printed map directions and my time in Boy Scouts gave me the skills to read a paper map too.

          • bentaber 19 hours ago
            Loose vs lose. Ask the llm
      • rewgs 7 hours ago
        I'm with you. I've said it before, but: LLMs have made clear who does things for the process, and who does things for the result (obviously this is a spectrum, hardly anyone is 100% on either end).

        The amount of people who apparently just want the end result and don't care about the process at all has really surprised me. And it makes me unfathomably sad, because (extremely long story short) a lot of my growth in life can be summed up as "learning to love the process" -- staying present, caring about the details, enjoying the journey, etc. I'm convinced that all that is essential to truly loving one's own life, and it hurts and scares me to both know just how common the opposite mindset is and to feel pressured to let go of such a huge part of my identity and dare-I-say soul just to remain "competitive."

    • tpmoney 19 hours ago
      > I feel like this will get missed by the general public. What’s the point in generating writing or generating art if it gives next to zero feelings of accomplishment?

      It depends on what you're trying to do. I mean if the point of doing anything is a "feeling of accomplishment" why hire anyone to do anything you could do yourself. Why hire a builder to build your home? Why hire a mechanic to fix your car? Why pay a neighborhood kid to mow your lawn? Why hire a photographer for your wedding? Why hire a cook to make a meal? People hire others because even if they could do it themselves, they don't enjoy it but they need or still want the outcome for some reason or another.

      Would you want to hire someone to write your blog for you? No, you probably wouldn't if it's a personal blog, so likewise you probably wouldn't want to use an AI for it either. But if it's a marketing blog like almost every business seems to have on their website these days, full of listicles and vague "did you know" marketing? Sure, it's probably already outsourced anyway, so why not use an AI.

      You probably don't want to be using an AI to generate artwork if you're aiming to make a painting that expresses your inner feelings. But if you're making a game and you suck at painting or drawing, you might hire it out, using an AI in that case isn't any different.

      • andrepd 12 hours ago
        > Why hire a builder to build your home? Why hire a mechanic to fix your car? Why pay a neighborhood kid to mow your lawn? Why hire a photographer for your wedding? Why hire a cook to make a meal?

        But precisely, "AI" is _NOT_ fixing my car or building my home or photographing my wedding!! It's writing a sludge of plausible-looking but empty slop that contaminates everything on the web, it's attempting to automate the visual arts, it's generating fake video that's getting harder and harder to distinguish from real one. It's automating things that SHOULD NOT be automated, and it's NOT automating things that should!

        • djeastm 8 hours ago
          >It's writing a sludge of plausible-looking but empty slop that contaminates everything on the web, it's attempting to automate the visual arts, it's generating fake video that's getting harder and harder to distinguish from real one.

          These are negative externalities, indeed, but the producer of the "goods" here does not feel those effects.

    • delichon 22 hours ago
      It reminds me of one of my dad's favorite dad jokes: "While you're up go to the bathroom for me." It's tough to delegate the catharsis of writing.
      • unyttigfjelltol 22 hours ago
        This attitude reminds me of a particular 1850s-ish building in my area, chock full of intricately hand-carved half-size wooden figurines. These were created by true masters of their craft. It's obvious - work that could only have been produced by a lifetime of devotion to a craft.

        And the industry is gone. No one has been able to produce figurines like that at any worldly price, probably for the last 100 years. The world is less for it, but it doesn't matter; art follows different, more efficient technologies and methods.

        I sympathize with these artisans of the written word. But they’re all wrong, they’re dinosaurs who don’t know it. I myself was one, churning on high-value bespoke written work. The economic model is wrong, we’re the expert 1850s figurine crafters, adapt or … burn out I guess.

        • apsurd 21 hours ago
          it's such a _wrong_ conclusion.

          Art for its own sake. Say something. Experience having said something.

          Economic value is the least of it. i get why economic value is the only thing that matters. we made the world this way. i get it.

          but also: art for its own sake. say something. experience the saying of the something.

          • tpmoney 19 hours ago
            Not all art is intended to "say something". Sometimes you want a piece of artwork because you want a portrait of your D&D character for the character sheet. Sometimes you want a background image to add some color to a slide. Sometimes you want a thing you like in a different style[1]. And don't get me wrong, if you're able to hire an artist for it, and you want to, you absolutely should. But there's nothing wrong IMO with using a machine to do it for you either. You can hand-cut dovetails for your drawers with a little practice, and I've done it. I'm also not going to judge someone for using a jig, or buying a pre-cut drawer. You can hand-make ravioli with flour, an egg, and a rolling pin. I've done it. It was incredible fresh pasta. It was also a ridiculous amount of work, and I'm just as happy to buy pre-made ravioli from the store, or roll out the sheets with a pasta machine.

            [1]: https://www.inprnt.com/gallery/canadianturtle/pacman-ukiyoe/

            • scrollop 18 hours ago
              If something is "art" then it will almost by definition "say something", even if only as the expression of an artist and their intention.

              If you are talking about the background colour of a slide, that is not "art", it's a simple choice.

              The portrait for your D&D character - if you used AI to generate it just because you need an image, any image, as a representation and you don't care, then it may be difficult to classify that as "art". If you drew it, regardless of how bad it is, and you like and appreciate and connect with it, that is "art".

              Of course, we may all have our own definitions of "art"

          • phantasmish 8 hours ago
            AI Economics (of money, anyway) aren't what'll kill writing. The vast, vast majority of writing is already not done for money. Approximately nobody makes meaningful amounts of money writing.

            The existence of a sea of AI slop making it impossible to find or publicize writing is what will kill it.

            It's purely a loss.

            • apsurd 7 hours ago
              Great point. Reminds me of the misinformation wars. People who respect evidence-based knowledge need to defend against a never ending onslaught of conspiracy theories. They only need to slip up once, while the other side of doubt can run their engine forever with the latest flavor.
        • komali2 22 hours ago
          That sounds like a beautiful building, and perhaps the market for such things is quite small now, but those artisans still exist. In Japan there are plenty of master carpenters and woodblock artists, including one American man who moved there about 40 years ago and dedicated the remainder of his life to the craft.

          In Taiwan I've met indigenous woodworking artists. They sell stuff in markets all the time, plenty of it incredibly intricate. Incidentally, many temples here are also covered in beautifully layered granite carvings.

        • bccdee 20 hours ago
          Nah, that misunderstands writing pretty fundamentally. Language is how we express ideas. An LLM that could write as well as a person would need to be able to think as well as a person, and they just don't. That's why nobody's publishing LLM books, and why the only LLM articles are SEO slop and advertorials.

          We've had LLMs for years. Image models and coding agents have gotten remarkably good, and their output is all over the place. So where is the AI writing? Outside of automated summaries, formulaic essays, and overly verbose LinkedIn posts, nowhere.

          • shakna 20 hours ago
            Plenty of dime novels published with AI as well. Resulting in a depressed fiction marketplace, where people are even more hesitant to buy from an unknown author.
          • fragmede 19 hours ago
            Someone hasn't looked at the Kindle bookstore lately.
            • bccdee 11 hours ago
              Amazon self-publishing is full of spam, obviously, but nobody buys those. They're the book equivalent of a broken 3-star GitHub repo with 2 commits. If that was all that coding agents were used for, I'd call those a failed use case too.
        • syphia 18 hours ago
          Writing may not be produced for the prestige of its result, but written words still serve an essential purpose for communication. I think that, as with any essential art, e.g. cooking, people will experiment with it to fit their needs.

          Writing is also peculiar in that it is easily referenceable with a deep history, so it serves as a way to compare one's own ideas to others. Memes are similar in principle, but tend towards esotericism and ephemerality in a balkanized internet.

        • smaudet 20 hours ago
          No.

          What will actually happen, likely, is a complete death of writing. Not just that the craft is gone, but that art is gone.

          What is the point of creating anything if it has no meaning? And likewise, there is no economic value to it either.

          So there will simply be no art, and paradoxically any true art will simply be so ridiculously expensive and unaffordable that nearly nobody will benefit from it any more...

        • trinsic2 21 hours ago
          If you are not doing art for art's sake, then I think you are missing the point. You can create all you want for the creator economy, and maybe many people need that right now. There will always be people who go to art to experience the awesomeness of life. It's just a different way of seeing the world; no reason to put everyone into a binary category of adapt or die. It's never going to be like that for everyone. I don't feel the need to compete for attention.
    • BurritoAlPastor 1 day ago
      > What’s the point in generating writing … if it gives next to zero feelings of accomplishment?

      Getting promoted, getting a better job, generating sales leads, things of that nature. A depressing number of blogs or LinkedIn posts exist only because the author is under some vague belief that it’s part of what they’re supposed to be doing to get ahead in their career.

      • toomuchtodo 1 day ago
      • Uehreka 23 hours ago
        This clearly was not what GP was talking about. Believe it or not, not all people do things for purely cynical reasons.
        • BurritoAlPastor 20 hours ago
          I think it’s perfectly germane. When a medium is both a means for making a good living _and_ a form of artistic expression, there’s a natural tension that emerges from people who pursue both those paths at once, in addition to the people who eschew either path entirely. Obviously many people avoid cynical reasons for doing the things they do - I’m among them - but you can’t fail to recognize that there’s always a demographic that doesn’t care about the art.
      • SamoyedFurFluff 23 hours ago
        I think somewhere along the way something has gone terribly wrong in the way we allocate capital to incentivize behavior. Somehow as a society we incentivize (aka distribute capital to) the people who educate our next generation and the people who care for our elders less than the people who smoke weed on podcasts and talk for hours into a camera or a microphone…
        • renegade-otter 20 hours ago
          Someone said "attention", and that is right. We are in the attention/extraction economy now. You are no longer a citizen - you are a walking number with a wallet.

          Did you see the NYC ball drop by any chance? It was plastered with ads. Ads on screen, ads on people, giant KIA ad below the ball that ruined the shot on purpose. Everything is a money grab now, because we are just eyeballs that see shit and buy it.

          If you think this is just an old man remembering things differently, here is 2002: https://www.youtube.com/watch?v=iB6OzLUQE3I

        • dexterdog 23 hours ago
          Those jobs don't scale at all, so there's no way for the top 0.001% to make significant money.
        • zmgsabst 20 hours ago
          On average and in total, we pay more to teach children than we do on podcasts.

          US podcasts have a total valuation of $8-9B with revenue of about $1.9B; total K-12 spending is $950B a year, roughly 500x (nearly three orders of magnitude) more.

          Most people sitting on a couch smoking weed on camera make little to nothing, while 3.8M teachers are paid an average of $65,000 per year.

          You're comparing one-in-a-million outlier podcasters to the average teacher in order to invert the overwhelming difference in what we put into education, both in total and on average.

        • rustystump 23 hours ago
          Attention is where capital is applied because the demand is so high for it. Society can control supply and demand about as well as it can control the weather.
    • noduerme 15 hours ago
      Yeah, sadly few people write or make art or music in order to fulfill their inner need to communicate. It's not like 99% of people who churn out "creative" work do so just to earn likes and followers and make money, but it's definitely way more than 50%. The thing is, what they do churn out is garbage, it's barely read by people who may themselves be garbage (or bots), it can be written by bots, and it, well, it just degrades the whole experience of writing and painting and playing music. Or rather, it pollutes it. But then, the rest of the world don't care either. Creatives themselves are a tiny portion of the population, and most of the population could not be bothered to read a book or a story, or find out who wrote a piece of music or painted a painting or designed the web page they're looking at. We call those people "consumers", but the truth is, lots of the people who currently strive to be "creators" are actually "consumers" who have very little to say and very little talent (but lots of tools to say it with).

      At some point, society and culture does separate the wheat from the chaff, but it takes a generation.

    • hsn915 22 hours ago
      For some, having an instagram profile with many followers is the accomplishment.
      • autoexec 20 hours ago
        Exactly. Some people don't have anything to say, and don't care about writing either. They just want attention, clicks/views, ad money, or followers. Others are passionate about something and want to share that passion, or just want to write for their own sake and it's just a bonus if someone else gets anything out of it.

        As the internet fills with slop, it'll only get harder to find the people who actually care about what they're putting online and not just the views or the ad revenue which is a shame because those are the types of people who make the internet interesting.

    • musebox35 16 hours ago
      I guess the sense of accomplishment is very person-dependent. I enjoy programming a lot, but it is easy to find people who would rather challenge themselves to scale said website to a million users/X views per day. I don't know why; probably there is no fixed meaning to existence, and nature likes diversity.

      For me, the fun in programming also depends a lot on the task. Recently, I wanted Python configuration classes that can serialize to YAML, but I also wanted to automatically create an ArgumentParser that fills some of the fields. `hydra` from Meta does that, but I wanted something simpler. I asked an agent for a design, but I did not like the convoluted parsing logic it created. I finally designed something by hand by abusing the metadata fields of the `dataclasses.field` calls. It was deeply satisfying to get it to work the way I wanted.
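
      A minimal sketch of that metadata trick, for the curious; the `TrainConfig` fields and the `"cli"` metadata key here are illustrative assumptions, not the actual design:

```python
import argparse
from dataclasses import dataclass, field, fields

@dataclass
class TrainConfig:
    # Hypothetical config: fields whose metadata carries "cli": True
    # become command-line flags; the rest stay config-file-only.
    lr: float = field(default=1e-3, metadata={"cli": True, "help": "learning rate"})
    epochs: int = field(default=10, metadata={"cli": True, "help": "number of epochs"})
    run_name: str = "exp"  # no metadata, so not exposed on the CLI

def build_parser(cfg_cls) -> argparse.ArgumentParser:
    """Derive an ArgumentParser from the dataclass's field metadata."""
    parser = argparse.ArgumentParser()
    for f in fields(cfg_cls):
        if f.metadata.get("cli"):
            parser.add_argument(f"--{f.name}", type=type(f.default),
                                default=f.default,
                                help=f.metadata.get("help", ""))
    return parser

# Flags override the dataclass defaults; unparsed fields keep theirs.
args = build_parser(TrainConfig).parse_args(["--epochs", "20"])
cfg = TrainConfig(**vars(args))
```

      YAML serialization could then sit on top of the same class, e.g. via `dataclasses.asdict`.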

      But after that, do I really want to create every config class and fill every field by myself for the several scripts/classes that I planned to use? Once the initial template was there, I was happy to just guide the agent to fill in the boilerplate.

      I agree that we should keep the fun in programming/art, but how we do that depends on the what, the who, and the when.

    • b00ty4breakfast 10 hours ago
      > What’s the point in generating writing or generating art if it gives next to zero feelings of accomplishment?

      Within the social milieu of industrialized society, mass production is the point. It's harder to see that when the goods are essentials like clothing or food, where we obtain some utility and any artfulness is secondary to that utility. But when we switch that around, and the artfulness of the good is the primary quality, it becomes very obvious that 10 trillion nearly identical pictures of cake is just production for its own sake.

    • zahlman 7 hours ago
      > I could generate some weird 70s sci fi art, make an Instagram profile around that, barrage the algorithm with my posts and rack up likes.

      It seems to me that it still takes a significant amount of luck to end up actually racking up likes that way.

    • mycall 10 hours ago
      Doesn't it depend on how you use AI? Sometimes the imagination of gluing things together is the goal, and the result is better than the sum of the parts; if you don't know how to put the pieces together, AI can support you there.

      If you agree with that, then like they say about prostitutes, it is just a matter of cost, appearance and complexity.

    • senko 8 hours ago
      > What’s the point in generating writing or generating art if it gives next to zero feelings of accomplishment?

      You answered yourself: to get something other than a feeling of accomplishment.

      Not realizing there can be something else is a failure of imagination.

    • Arn_Thor 20 hours ago
      > What’s the point in generating writing or generating art if it gives next to zero feelings of accomplishment?

      $$$

    • dev_l1x_be 16 hours ago
      I view generative AI as a visual synthesizer. It bridges the gap between my imagination and my lack of drawing skills. Just as synthesizers allowed one person to compose a full song without a band, AI allows me to get the exact images I want to create.
    • onion2k 14 hours ago
      > The likes will give that instant dopamine but it will never fill that need of accomplishing something.

      The money from sponsors that comes with building a popular account goes a long way to mitigate that though.

    • tossandthrow 18 hours ago
      For me the issue is not so much the sense of accomplishment. After all, one usually needs to iterate on prompts until content with the result, so I think many people can still get a sense of accomplishment.

      The issue is the loss of control and intimate knowledge about my own work.

    • parsimo2010 20 hours ago
      You can’t write your magnum opus without any practice. Some people write every day, possibly without enjoyment, so that they can create something noteworthy after they have developed their writing skills to be capable of it.
    • kelnos 22 hours ago
      > I could generate some weird 70s sci fi art, make an Instagram profile around that, barrage the algorithm with my posts and rack up likes. The likes will give that instant dopamine but it will never fill that need of accomplishing something.

      The value, I expect, to some people, is that if they can monetize that, then it's worthwhile to them, while letting them spend less time on it than if they had to do it themselves (or maybe they aren't artists and couldn't do it themselves, period).

      I personally find this kinda dishonest, uncreative, and not something I'd care to look at, but that's just me.

    • NooneAtAll3 19 hours ago
      > What’s the point in generating writing or generating art if it gives next to zero feelings of accomplishment?

      "a camera gives no feeling of accomplishment in creating a picture"

      and yet photography is an art of its own, and painting also has not disappeared

      ---

      or heck, "taking digital photos gives zero feeling of accomplishment because you didn't do the developing in a darkroom"

    • fragmede 23 hours ago
      Why indeed! Maybe after the singularity we can all be artists and musicians, doing things only for the love of the craft. Until then, we all got bills to pay.
      • socalgal2 21 hours ago
        I kind of wonder if there will be more or less. Maybe more people doing it for fun, but less overall?

        I watch a few DJs on Twitch. They seem like they are having fun but I'm pretty confident many of them would stop if there was no money in it. Maybe after they don't need the money they'd still do it. They need the money now and it's a fun way to get it.

        Similarly, I watch Veritasium and Kurzgesagt. They both put in lots of work with teams of people. I think they both enjoy it, but some of that enjoyment comes from "making a living at it". If that disappeared, if they didn't need the living from it, I wonder if they'd continue.

        • ultratalk 14 hours ago
          Both channels were started out of enjoyment and a sense of purpose, and in his latest video, Derek from Veritasium mentioned that he continued doing it, even after money began rolling in, because it was fun and it helped people improve.
    • sidrag22 15 hours ago
      I've been dabbling in writing for the past few weeks, and like anything I'm working on this past year, I feel the need to somehow route AI into the process...

      Writing, however, is perhaps the one area where it really is quite literally nothing but a rubber duck for me. I think this past week I have written ~10k words, and the AI suggestions I've taken straight up amount to maybe 10 words, and even those were likely modified.

      I straight up hate all its suggestions for how to word stuff; maybe it has something to do with the number of prompt responses I've read this past year. If I could generate a nice display of the physical eyerolls I've done this past year, topping that list would be a chatbot response starting with "you're touching on something" or some other painfully common output.

      Also, I wouldn't say it's worthless for my writing: it helps me pinpoint my weak portions. I just don't take any of its suggestions to strengthen them, and find my own instead.

    • KaiserPro 14 hours ago
      This is a big point that's often missed.

      A lot of American business communication is packing, fluff, or filler, there either to disguise a lack of knowledge or to avoid making firm statements.

      Unless you are very careful, standard LLM output will wrap a bunch of obvious points in lots of filler language. This works in business because the most toxic phrase you can utter is "I don't know". So we are used to verbal noise, and we pick through the filler to glean clues to what the writer actually knows or wants to assert (or not assert).

      If you look at modern tech journalism, it's either thinly reworded PR pieces or reiteration of other, not particularly relevant opinions (see Meta's AR glasses). You skim them to pull out the interesting points (full colour, speakers, battery life, etc.). The rest is just packing.

      But for "pleasure" reading, i.e. stuff that's not directly related to your chain of command, there is no use in reading that shit.

      Either it's a story, where you need to impart emotion or a novel viewpoint; or it's an argument, where you also have a story, with some "facts" that also support an emotion.

      That requires some level of understanding of the subject matter, to make a coherent narrative that doesn't feel empty.

      TL;DR: LLMs generally produce business passive voice, which is almost useless as a form of communication. Just send bullet points.

    • yunohn 17 hours ago
      Indeed, AI is basically being used to generate various sorts of “spam”. You can see it all over the internet, and feel the uncanny valley ick.
    • mlinhares 22 hours ago
      You have to talk to more tech bros, there's clearly not enough of them in your orbit LOL.
    • bschmidt1121234 11 hours ago
      [dead]
      • nyarlathotep_ 3 hours ago
        This mirrors my experience as well, when using things like ThreeJS.

        Any SOTA model can one-shot something that looks pretty similar to something from Three's examples, but things go south quickly when attempting to increase the complexity, even with pretty unambiguous instructions.

  • ciconia 16 hours ago
    > but I quickly concluded the writing suffered from the same uncanny valley effect as many AI-generated images: It all looks fine enough at first glance, but pay attention just a little longer, and something feels off.

    My thoughts exactly. In all my interactions with gen AI it was always the same: on the surface it looks pretty convincing, but once you look more deeply it's obviously nonsense. AI is great at superficial imitation of human-created work. It fails miserably at anything deeper.

    I think the biggest problem with AI is that most people just don't take the time or effort anymore to really look at an image, really read a text, or really listen to a piece of music, or a podcast. We've become so habituated to mindlessly consuming content that we can't even tell anymore if it's just a bunch of stochastic nonsense.

    • eamag 16 hours ago
      https://www.astralcodexten.com/p/ai-art-turing-test

      You can try a Turing test yourself. I've met several people claiming they can always spot AI art; none of them actually can (and AI art has become even better now!)

    • stavros 16 hours ago
      This is comparing LLMs to the best humans, and concluding that LLM output is "nonsense". Well, LLM output is better than the average human's output, and there are many humans at and below the average.

      For four billion people, using an LLM to create things is a marked improvement. I'm not sure how you'd explain the phenomenally widespread use of LLMs otherwise.

      By the way: Can you tell whether my comment (this one) was written by an LLM or not?

      • bspammer 14 hours ago
        I think your comment was not written by an LLM.
        • Paria_Stark 11 hours ago
          Some people are going to spend 10 minutes refining a prompt to get a human-looking two-paragraph message, after rewriting half of it. Then they're going to be like GOTCHA, I USED AN LLM.
      • zacharyvoase 16 hours ago
        Not everyone is supposed to create.
        • stavros 16 hours ago
          The people who like creating don't use LLMs to do it, any more than the people who like cooking order takeout.
          • dragonwriter 16 hours ago
            I know plenty of people who like to create, but who also have a better technical understanding of LLMs, who use LLMs in their workflows (some even finetune LLMs on their own work and then incorporate them into their workflows).

            Most people who are non-technical (including most creators) have an extremely naive view of what LLMs are, mostly driven by the media and by shills targeting audiences that aren't creative, and their response to LLMs is shaped by that.

            • stavros 16 hours ago
              I should have said "the people who like to create don't use LLMs for the parts they like creating". I like making products that are easy to use and useful, I have LLMs write 100% of the code but I still do all the UX by hand, because that's what I enjoy.
    • baxtr 12 hours ago
      Which if you think about it makes a lot of sense.

      So far we've trained it only on the outputs of our weird thinking process.

  • analogpixel 1 day ago
    > I don’t write a daily blog to crank out a post every day. If that was the point, I’d have switched to AI long ago already. I write a daily blog to make sure I remember how to think.

    I'm always surprised when people say they use LLMs to do stuff in their Journal/Obsidian/Notion. The whole point of those systems is to make you think better, and then you just offload all of that to a computer.

    • roughly 1 day ago
      A friend noted that many people seem to be cosplaying their lives, and it’s hard not to see it once it’s pointed out.
      • rapidfl 19 hours ago
        After reading this comment, I feel leaning into the cosplaying would make me more productive/prolific. Many things I don't push through on are ones that seem superficial or a bit fake.

        This doesn't apply to everyone, but maybe there should be phases where you cosplay hard, then reflect and realign.

        • albert_e 17 hours ago
          I see your point.

          I am noticing that I am very quick to get excited about a thing, and also very quick to lose motivation to pursue that new thing to a meaningful level of understanding and mastery.

          Yesterday I was excited about something that I wanted to build a proof-of-concept of and blog about proudly. It might take 2-3 days of intermittent effort, juggled between other things, but god was I excited to see it through.

          I reaped great dopamine learning the first 30% of the stuff by end of day.

          Today I wake up and wonder what got me so excited yesterday. Of course, I know the basics now; parts of it even seem obvious. Would anyone really be interested in me talking about it?

          If I threw my hat over the fence by cosplaying an active builder and blogger ... maybe I would have seen it through 3 days of commitment?

      • jackyinger 1 day ago
        Great point, this cosplay phenomenon goes far beyond LLM use.
        • Enginerrrd 10 hours ago
          “Vanity of vanities, all is vanity!”

          A tale as old as time.

      • yeahforsureman 22 hours ago
        Funny, I tend to use larping for similar analogies. Not a huge insight or anything, just crossed my mind... I guess there's also overlap, or at least some kind of similarity with cargo cults, too? :)

        EDIT: Trying to stay on topic and score some po--, cargo I mean...

        • trashburger 13 hours ago
          Call it larping, being performative etc. but it is a concept as old as time. People emulate the interface of successful people without actually having the implementation of successful people.
      • alansaber 23 hours ago
        Most people do, some just more blatantly
    • wtetzner 17 hours ago
      And what is the point of AI generated posts when everyone has access to LLMs themselves? They can also generate whatever text they want.
    • Kiro 18 hours ago
      > The whole point of those systems is to make you think better

      I'm not using LLMs for my notes but "think better" has never been a goal for me.

    • kettlecorn 23 hours ago
      Sometimes when working through difficult problems I will write pages of notes exploring a topic from a bunch of different angles until my brain is a bit exhausted.

      I've found LLMs work reasonably well to just copy-paste that blob of thoughts into to have them summarize the key points back to me in a more coherent form.

      • kaashif 23 hours ago
        I find value in going from the unstructured blob of notes into structured and coherent thoughts myself, rather than with an LLM.

        If I understand something well, I can write something coherent easily.

        What you describe feels to me along the lines of studying for an exam by photocopying a textbook over and over.

        • ragequittah 21 hours ago
          Usually studying a textbook means reconceptualizing it in whatever way fits how you learn. For some people that's notes, for some it's flash cards, for some it's reading the textbook twice and they just get it.

          To imagine LLMs have no use case here seems dishonest. If I don't understand a particularly hard part of the subject matter and the textbook doesn't expand on it enough, I can ask the LLM to break it down further, with sources. I know this works because I've been doing it with Google (slowly, very slowly) for decades. Now it's just far more convenient to get to the ideas you want to learn about and expand on them as far as you want to go.

          • nunez 19 hours ago
            My issue with using LLMs for this use case is that they can be wrong, and when they are, I'm doing the research myself anyway.
    • jphorism 23 hours ago
      It's possible to do both. I write in a small field journal to think better, then periodically use Wispr Flow to quickly transcribe it to Obsidian (where I can use LLMs on the writing).
    • akoboldfrying 1 day ago
      IIUC, you believe that (a) using a tool like Notion is a useful brain-multiplying lever vs. struggling to keep everything in your head, but (b) using a tool like an LLM is a harmful brain-rotting exercise vs. struggling to do everything with your head.

      In your opinion, what is the differentiating factor?

      • lukeschlather 23 hours ago
        Using a tool like notion is organizing your thoughts into a coherent structure so that you can reexamine them. Running that coherent structure through an LLM replaces your structured thoughts with other thoughts you didn't have. You've gone to all this trouble to record your memories so they don't change, and then you run them through a machine that replaces your memories with randomly generated ones.

        I think AI is a great tool in certain circumstances, but this sounds like one of the clearest examples where it is brain rot.

  • nospice 1 day ago
    Some folks might enjoy writing for the sake of writing. But I'd wager they don't enjoy, say, plumbing for the sake of plumbing? When their toilet is clogged, they call a pro and don't treat it as a journey of personal growth.

    I think this works both ways. Your average plumber doesn't enjoy writing. It's something they might need to do now and then, but if you give them a magic box that solves the problem, they're gonna be overjoyed. One less chore.

    Plumbing or writing, I don't think you can convince people not to take shortcuts by telling them "but the fact it's hard is what makes it worthwhile for you!"

    • hunterloftis 1 day ago
      My takeaway from the read wasn't that it was trying to convince anyone to take any particular action; it even emphasized that the mediocrity of AI output, as more people use it, will benefit the smaller number of people doing their own thinking.
    • tolerance 23 hours ago
      My impression is that the author’s intended audience is other ‘Writers™’.

      But you can still make the case that writers and plumbers alike who enjoy doing their work for its own sake should embrace the reward effected by conquering the tedium of their trade and not take shortcuts.

    • jphorism 23 hours ago
      The plumbing metaphor is interesting. The most adjacent place I've heard this metaphor is in theology (Mako Fujimura)

      The goal of plumbing is to fix / repair; certainly it's possible to enjoy fixing and repairing. But is the joy of writing in "repairing ideas"? How is that a separate concept from creating new ones?

    • lifetimerubyist 21 hours ago
      So if you don't like plumbing, don't get a robot to do your plumbing and then claim you're a "vibe plumber".
      • doug_durham 17 hours ago
        Why not? (Other than they don't currently exist). I would absolutely get a robo-plumber. I have other tools for fixing things around the house. LLMs are just tools.
        • throaway54321 17 hours ago
          I think the issue is when you claim to have _done_ something yourself when in fact you didn’t. I think people have trouble differentiating real value vs what they think of as some flavor of carpetbagging
          • djeastm 12 hours ago
            Talented creators have hated poseurs since time immemorial.

            I think the issue is that it's gotten far easier to be a poseur than ever before.

        • lifetimerubyist 10 hours ago
          Nothing wrong with getting a robot plumber, I would do it too. But I wouldn't go around doing all my friend's plumbing and/or completely redo all of the plumbing in my home with it and then call myself a plumber (notwithstanding that plumbers are licensed professionals).
          • mkl 10 hours ago
            You've changed the hypothetical there. Calling yourself a vibe plumber and calling yourself a plumber seem like very different things.
    • xantronix 22 hours ago
      I think the crucial difference here is that literally anybody can write without taking on years of training or a tremendous financial burden.
      • stevedonovan 15 hours ago
        But are they writing? In that specific sense of high prestige communication? Sounds like stolen aura.

        Now, code by Gen AI is straightforward in comparison. Coding is not writing poetry, even if the lines also don't reach the right margin

    • n1b0m 17 hours ago
      This Software Architect would disagree with you :)

      https://www.linkedin.com/pulse/when-architects-plumb-why-you...

    • bodge5000 20 hours ago
      Sure, but if you outsource your plumbing you don't call yourself a variation of a plumber (a vibe-plumber I guess). The magic box is nothing new, it's just that it used to be operated by a human.
    • Sparkle-san 22 hours ago
      Why would I call a plumber when I can fix it myself for free and learn something new while I'm at it?
      • huimang 21 hours ago
        Sure, it's free*.

        - except the cost of materials and gas to drive to the hardware store, which you'll likely do twice or thrice as you realize you bought the wrong thing or need some other specific tool that you'll use once a year or less

        - except the cost of your own time away from personal projects and family

        - except the cost of hiring a plumber afterwards to professionally fix the problem you caused by DIY'ing it without the knowledge and experience that a professional brings

        • goopypoop 18 hours ago
          One does not simply "hire a plumber". You're suggesting to find, vet, arrange, brief, supervise, assess and pay a plumber. Your time is still gone and you still pay for materials and gas. Then you still might need to hire yet another plumber to fix it better anyway. If it's really bad you might need to hire a lawyer…
        • stavros 16 hours ago
          I see these arguments time and time again, and the two sides fail to see one simple thing: different people enjoy different things. The person who would hire a plumber would pay to not have to do plumbing, the person who'd do it themselves would pay to do it because it's fun for them.

          And now you understand each other!

      • Kiro 18 hours ago
        The fact that you ask such a question shows how different people are. It almost reads like a joke to me.

      I have zero interest in learning about plumbing, and I would pay a professional even if I could do it myself, to avoid any doubts or fears of messing it up.

    • wombatpm 22 hours ago
      I disagree. My toilet was leaking. I diagnosed the issue as a bad tank gasket, replaced said gasket, and things are working fine. Could a licensed plumber have done it faster? Possibly. But a $300 service call was avoided.

      Now, I did have my house replumbed to relocate the well tank, water softener, water heater, and whole-house filter, and to move the washing machine and slop sink. That I left to the professionals. But I watched to learn more.

      If I outsource my thinking and can’t see how the box does it, what is the purpose?

      • tpmoney 19 hours ago
        This is being pedantic though. Is there nothing in your life that you outsource because you just don't enjoy doing, or it would take too much time away from what you actually want to do? It's highly unlikely you grind your own flour at home from wheat berries, even if you do make your own bread. Plenty of people enjoy home brewing, and plenty of people would rather buy a good brew from a local brewer and have no interest in brewing. Even if you're into wood working, you're probably not milling your own boards. You might like cooking, but you're probably not mining your own salt. There's a pretty good chance that you don't make your own pasta from scratch. And also a good chance you're not butchering your own meats. Someone I'm sure is finding artistry and fulfillment in doing these things for themselves. And for other people they're distractions that take away from doing what they actually want to do.
  • vunderba 1 day ago
    From the article:

    > When you’re stuck and sit there, thinking, trying to come up with what’s next, that’s the valuable part of writing.

    Not just what’s next, but the question of what to write in the first place.

    I’ve pointed it out before, but this idea of quiet contemplation is exactly where LLMs completely pratfall. The fewer details or instructions you give them, the less novel the output.

    I can’t speak for everyone, but when I want to write a new blog post on my site, it’s precisely the opposite. I dim the lights, sit quietly, and let the neurological brownian motion machine do its thing.

    • CuriouslyC 23 hours ago
      Polar opposite. Too many ideas, insufficient time/motivation to give them the treatment they deserve. I'd lovingly hand craft them, but honestly quality doesn't convert, content is a slot machine, gold will fail one day and shit will strike it big the next. When the people complaining about slop stop upvoting it maybe I'll come around.
  • jewel 22 hours ago
    "If it's not worth writing, it's not worth reading."

      - https://claytonwramsey.com/blog/prompt/
  • throw7 23 hours ago
    "...reading actual books in full might now be more valuable than it ever has been..."

    Call me old-fashioned, but when has this ever not been true? Like, yeah, does someone read Cliffs Notes and go, "that was really edifying and I gleaned incredible insights into myself and the world!!!"?

    • socalgal2 21 hours ago
      depends on the book? I've read lots of books where it turned out the author had effectively one idea, it should have been 1 chapter, but they turned it into a book with 24 chapters of filler.
      • jraby3 17 hours ago
        I've thought about this a lot. But my conclusion is that many times the base idea is valuable but the author spends time using examples and different use cases to show you the effectiveness of the idea and a wider range of uses.

        Of course this isn't always true but it's true quite often.

        Take one random example - Spark: The Revolutionary Science of Exercise and the Brain.

    The idea is in the title. You don't need to read more than that to benefit from the idea. But all the different varieties of benefit and pathways and studies the author cites are still valuable.

        • sidrag22 15 hours ago
          another example

          Range: Why Generalists Triumph in a Specialized World

          The idea is also in the title, and it displays so many different scenarios of people engaging with specialized fields and interacting with them in ways that relate to their past experiences.

      • throw4847285 11 hours ago
        So you've read a lot of bad books.

        Productivity hacks and pop psychology are not what we're talking about here. We're talking about interesting works of non-fiction. And if it's fiction, and you think that there is "one idea" and you can skip the rest, I don't know what to tell you.

    • JumpCrisscross 20 hours ago
      > when has this been ever not true?

      We had a literary explosion in the last few decades where the competitive advantage of reading may have reached its nadir. (The supply side also screwed the pooch. Recent non-fiction has been polluted with fluff. Literature, on the other hand, is in a renaissance.)

      In the last two years, on the other hand, I’ve found significant advantage in being able to speak, write and read clearly. The only thing I can think of is people marking themselves through LLM use, directly and indirectly.

    • dkdcio 23 hours ago
      this is my constant take with “AI”…if you were lazy before, you’re lazy now. if you were producing slop before, you’re producing slop now

      I think this just widens the gap between people who give a shit and those who don’t

      the big thing that changes is the economics of laziness and slop

  • aucisson_masque 3 hours ago
    > The more I think about it, the happier I am that AI is transforming the world of writing. In a way, I think it’ll make it even easier to stand out—because the more people take shortcuts, the less quality will remain for readers to flock to, even if the overall quantity of options is much larger

    I’m afraid people will just start getting used to AI-written books and go on with it. Just like what happened on YouTube.

    It’s only going to shrink writers' opportunities, because their earnings will decrease.

  • phplovesong 17 hours ago
    The web's downfall started with AI. Soon everything will be AI generated, from text posts, to code that is shared with "look what i made, its cOoL", to videos, podcasts, etc. The human touch will be gone, and new models will then be trained on AI-generated content, making the feedback loop worse and worse.

    It is time for a new web. A new standard, a new everything. A new start without the AI bloat. Either something like this will emerge, or we will lose the web we have.

    • sanderjd 9 hours ago
      I've been reading about the web's downfall for way longer than generative AI has existed.
  • mzajc 22 hours ago
    > In a way, I think it’ll make it even easier to stand out—because the more people take shortcuts, the less quality will remain for readers to flock to, even if the overall quantity of options is much larger.

    I really want this to be the case, but what I've observed so far is that slop networks with thousands of domains and millions of generated articles simply drown out everything else. It's becoming increasingly difficult to tell apart pages written by humans from those written by conmen, especially if I'm not an expert on the subject matter.

    As an incredibly egregious example, here's one of the top results (#1/#2 on duckduckgo) for "wireguard mesh": https://www.ltwireworks.com/blog/how-to-configure-wireguard-.... Yes, it's a grill mesh manufacturer.

  • jolhoeft 22 hours ago
    A cautionary warning about AI I'm starting to use is, "make sure you aren't taking a forklift to the gym". Getting heavy objects off the ground is vastly easier with mechanical assistance, but doing so completely misses the point of lifting weights.

    A work-related example I have is using AI to generate project plans. LLMs can probably generate an OK project plan for straightforward projects with plenty of examples to be trained on. But perhaps the most important value of generating a plan is the thinking that goes into it: considering alternatives, likely failures, unlikely failures, etc. In generating the plan you are starting to practice dealing with problems that would come up while implementing it. The knowledge in your head is more valuable than the document produced. The document is just a summary of all the thinking you have done, essentially a collection of mnemonics. Many details in your head will never make it into the formal plan, but will be needed during implementation.

    • nunez 18 hours ago
      A more accurate metaphor comparing the gym to LLMs IMO is using cable machines in place of old-school barbell and dumbbell work.

      You can build strength-focused programming around cable machines --- and people have. "They're safer and work target muscle groups more efficiently" is usually the argument. A Life Fitness Synergy system is also much more practical to own inside of one's house than a power rack and 1000+ lb of plates that will make quick work of most home flooring.

      This strategy works. It's sure as shit better than doing nothing. But quadriceps, delts and lats don't work in isolation. They rely on secondary and tertiary muscles and entire kinetic chains to help them accomplish tasks.

      Cables do hit muscle groups directly, but they also lead to diminishing strength and physique returns much more quickly than boring traditional weight training. They also lead to problematic muscle imbalances that, ironically, can cause overuse injuries later in life (super heavy leg extensions with improper knee flexion comes to mind).

      • tpdly 12 hours ago
        Just because LLMs are a technological innovation for "going to the gym" does not make cable machines a good metaphor. Maybe cable machines with cables made of highly variable grade hemp are comparable to LLMs-- they'll break randomly, and cause unexpected friction here and there. A cable machine still involves a human doing a thing. A forklift at the gym does the work instead.

        All this fluff about targeting specific muscles etc. is simply not analogous to LLMs. Maybe old-school barbells are paper files and fax machines, and cable machines are Slack, Asana, and Excel?

  • ronbenton 23 hours ago
    >The more I think about it, the happier I am that AI is transforming the world of writing. In a way, I think it’ll make it even easier to stand out—because the more people take shortcuts, the less quality will remain for readers to flock to, even if the overall quantity of options is much larger.

    I had some musings about this with respect to blogging. Especially because search engines are now placing their own summaries above SEO-optimized junk posts. Those posts become disincentivized. Hopefully, it leaves us with more people writing blogs for the sake of writing rather than trying to sell clicks.

    • namrog84 19 hours ago
      The downside of this is what happens in other places. You won't necessarily stand out.

      The entities doing endless reposts are building faster bigger audiences and might or will repost your stand out piece.

      If you care that you wrote it and that people enjoy it, all is good. But if you wanted to stand out with it or build around it, the low-effort reposters might take that from you.

      In a rare occasion I've even seen a reposter shorten or better edit the original piece.

      Though I am still hopefully optimistic that in the long run it works out for the good.

  • NooneAtAll3 19 hours ago
    these types of posts really do have a "professional painter says the camera has no use for him" vibe

    casual painting also "makes you remember how to see" and stuff - that doesn't mean that taking photos stops you. It's just different

    • tpdly 12 hours ago
      Bad analogy. More like, "Professional painter says he doesn't employ low wage contractors to paint for him"

      If your rebuttal is "Michelangelo would've only painted the broad strokes and the faces" you're still missing the point that he still /did some painting/.

      • NooneAtAll3 6 hours ago
        where in AI use did you find low-wage contractors?

        both photography and AI are literally "click a (shutter) button" - so the photo analogy is perfect

        And Michelangelo is a bad example because it's "ye olde paintings" (you could've at least tried with Picasso or smth) - while my argument would be "painters got replaced by photographers"

  • maxfromua 17 hours ago
    I feel like part of this post is a bit of hypocrisy.

    > This is why reading actual books in full might now be more valuable than it ever has been: Only if you’ve seen every word will you discover insights and links an AI would never include in its average-driven summary.

    Is summarizing by a human much different? Let's check if the author has a consistent stance on reading every word.

    https://nik.art/books/

    > The 4 Minute Millionaire: 44 Lessons to Rethink Money, Invest Wisely, and Grow Wealthy in 4 Minutes a Day > This book compiles 44 lessons from some 20 of history’s best books about money, finance, and investing. Each lesson can be read in about 4 minutes and comes with a short action item.

    Hmmmm

    • i7l 11 hours ago
      > Is summarizing by a human much different?

      One thing I have noticed and drives me up the wall with AI-generated summaries is that they don't provide decent summaries most of the time. They are summaries of an actual summary.

      For instance: "This document describes a six-step plan to deploy microservices to any cloud using the same user code, leading to various new trade-offs."

      OK, so what are these six steps and what are the trade-offs? That would be the real summary I want, not the blurb.

      The point of a summary is to tell me what the most important ideas are, not make me read the damn document. This also happens with AI summaries of meetings: "The team had a discussion on the benefits of adopting a new technology." OK, so what, if any, were the conclusions?

      Unfortunately, LLMs have learned to summarize from bad examples, but a human can and ought to be able to provide a better one.

    • b3kart 13 hours ago
      Not necessarily: assuming I've been following Nik for a while, I have reasons to trust his summary more than an LLM's summary. I would understand Nik's biases, and understand why he would focus on one thing over another. Nik would have a reputational incentive to do a good job and not completely misrepresent the book. I would also value Nik's personal, subjective view on the material, having an understanding of his background and, again, his biases. On the other hand, I would have no idea what an LLM would focus on when summarizing, I would have no reason to trust it (LLMs fail in unpredictable ways), and an LLM's "opinion" is some average over the internet's + annotators' opinions.
    • benrutter 15 hours ago
      Not sure that's fair - claiming you prefer reading texts in full to summaries doesn't seem the same as saying you never want to read a summary in any context?

      Aside from that, it seems more valuable to think about the ideas in the blog on their own merit, rather than attacking the writer for not having been true to those ideas in every past action.

  • zdragnar 1 day ago
    > The path behind easy only leads to the lowest common denominator. The real artists, fighters, makers—they stick with a truth as old as time itself: The suck is why we’re here, and only those who overcome it themselves will reap all the rewards of their hard labor.

    The other day, my wife needed to divide something, and rather than get up and walk to the next room to grab her phone, she did it on pen and paper longhand.

    At first I was amazed that she bothered instead of grabbing her phone to do it.

    Then it occurred to me that, while more people than I expect probably remember how to divide by hand correctly, I don't think I've actually seen someone do it in years, perhaps since my school days.

    I do agree with the author that art is a human endeavor and mastery requires practice... But I'm less optimistic that mass adoption of the easy way will let masters stand out. More likely, they'll just be buried under the deluge of slop the public craves.

  • komali2 22 hours ago
    I strongly recommend "How to Take Smart Notes" by Sönke Ahrens; it gets into how important writing is as a part of the process of thinking and learning.
    • snoman 18 hours ago
      Maybe you’re the person I’ve been looking for, but I’ve yet to find someone who actually maintains a zettelkasten, isn’t a researcher/author, and doesn’t come to the conclusion that it’s a huge waste of time and energy.
      • 63stack 13 hours ago
        I just had the same thought. I searched for the book mentioned, and as soon as zettelkasten popped up I lost interest. I've read about zettelkasten so many times, but I can't get myself to actually try it; it seems like doing organization for the sake of declaring "I'm very organized", regardless of whether it's efficient or not.
        • komali2 8 hours ago
          I had this same issue with my original zettelkasten system in org-roam. Also, it had become just a locally indexed Wikipedia.

          This book is probably not the greatest resource for establishing a zettelkasten, but it is very good at demonstrating how a good externalized writing system is critical for getting good learning done and finding unique insights. Also, it addresses the wikipedia issue I had specifically. As another person mentioned in this thread, making the notes more like proper publishable writing (even if just a sentence) made a big difference.

      • tpdly 11 hours ago
        I've found it somewhat valuable in two ways and unhelpful/misleading in another.

        1. Making small notes is intuitive and low-pressure. I was already essentially doing this, but in the form of various lists of "ideas" or "thoughts on _blank_". You can't reliably decide where you would've put something; it becomes a mess. The fact that it's a single directory of .md's with phrasal titles is a great organizing constraint.

        2. Being able to find old thoughts/ideas easily and link them together led to the clarification of a lot of my more unique ideas, because of the ad hoc link-language that emerged.

        The big problems are the rabbit hole of manic articles promising too much, and the fact that after a while you simply have too many half-baked two-year-old notes that the whole thing becomes limiting and you declare note bankruptcy.
      • komali2 8 hours ago
        So, first, most would say the purpose of a zettelkasten is to write. The book goes into this, that your notes eventually just get incorporated into manuscripts, and that your notes should be written as well as if you were writing a manuscript itself.

        However, what really clicked with me about the book was the hypothesis that true human thinking can only be done externally, through writing, due to the limitations of our brain as a platform. The book lists out things like recency bias and short term memory limitations that get in the way of proper, structural thinking that results in actual insights. Whereas maintaining a zettelkasten, or a simulacrum of one at least, externalizes your thought process and allows you to achieve genuinely your maximum potential for thought.

        The arguments went beyond the normal ones about the recorded benefits of note-taking for learning, memory, and creativity, and got into the aspects unique to a zettelkasten that make it an enabler for thinking. However the book also pitches this as a productivity boost for authors and researchers, and doesn't really seem to care about people who are just learning for the sake of learning (but it does make a solid case that building a zettelkasten makes learning more fun).

        Personally I've been reading criticisms of the book as a way to learn how to maintain a zettelkasten that I agree with: it's not specific or clear enough, and it defines too many different kinds of notes (and not all at once; some note types are defined like 3/4 of the way through the book). For me it was just a very convincing argument to stop trying to make my brain do things it isn't good at - stop beating myself up trying to memorize super detailed facts, let my external system handle that. Stop worrying about forgetting bits and bobs of the various books I've read, let my external system slowly create a map of ideas of everything I'm reading. Stop over-optimizing all my note taking systems and just scratch shit into a paper pad, to be indexed as a good zettel later (or just thrown away if I decide it's not helpful).

        So, though I do intend to use this system to fuel my blog, I think I'd still find value in it just in feeding the conversations I have as well. I'm deeply interested in non traditional politics, leadership, and activism, and with this system I've adopted I'm finding myself make connections I don't think I'd have made before; for example this very idea of externalization and scaffolding of human thought as a means to make up for our flaws, I'm finding similar threads in all sorts of things I read now.

        If you're interested in zettelkasten, I would recommend a different resource for learning how to actually set one up (just the internet plus ChatGPT is probably fine, plus some FOSS software). I will say, if it's taking too long, whatever you're doing is too complicated. It should take a single click or button press to make a new note, it should be very easy to scan through your notes and make links every once in a while, and making a link should be no more than a highlight, a button click or press, a search, and a confirmation. If you're anything like me, you may spend more time setting something like this up and agonizing over it than you will using it... that's why I moved from org-roam to Trilium, so I could just stop hyper-optimizing and start using the damn thing.

  • nemo1618 21 hours ago
    This is conflating two things: The stuck, and the suck.

    As the author says, the time you spend stuck is the time you're actually thinking. The friction is where the work happens.

    But being stuck doesn't have to suck. It does suck, most of the time, for most people; but most people have also experienced flow, where you are still thinking hard, but in a way that does not suck.

    Current psychotechnology for reducing or removing the suck is very limited. The best you can do is like... meditate a lot. Or take stimulants, maybe. I am optimistic that within the next few decades we will develop much more sophisticated means of un-suckifying these experiences, so that we can dispense with cope like "it's supposed to be unpleasant" once and for all.

  • skybrian 1 day ago
    > “Having AI summarize a book or a paper for me is a disaster. It has no idea what I really wanted to know. It would not have made the connections I would have made.”

    I don't disagree, but on the other hand, searches are not useless. They're limited because you do need to create a query capturing what it is that you're looking for, in advance. But we do that all the time.

  • ar_turnbull 21 hours ago
    The point about recognizing what’s valuable and making sure you don’t outsource that resonates.

    The other day I was on LinkedIn and a Chief Design Officer at a notable company posted her reflections on leadership for the year. There were some potentially interesting insights, but they never got past a surface level. The AI-ness of the writing was as clear as day (and GPTZero tagged it as 100% likely to be AI).

    It’s disappointing when you see leaders and so-called stewards of taste farming out that part of their voice.

    • fabianholzer 15 hours ago
      > It’s disappointing when you see leaders and so-called stewards of taste farming out that part of their voice.

      The bland platitudes of corporate management were mindbogglingly boring drivel already in the era before LLMs went mainstream. What we get nowadays is just more of it. That stuff should be skipped anyways. What was not worth writing is not worth reading.

  • jphorism 23 hours ago
    I wonder if the premium on consistent writing quality is different than the premium we place on consistent novelty.

    I'd hazard a guess that from the writer's perspective, novelty scales with volume of thought / connections, which is (at present) a fragmented process and not that well-assisted by AI. OTOH, can "writing quality" be better approximated by LLMs?

  • djaouen 5 hours ago
    > The suck is why we’re here, and only those who overcome it themselves will reap all the rewards of their hard labor.

    Thinking/writing isn't "the suck".

    > because the more people take shortcuts, the less quality will remain for readers to flock to, even if the overall quantity of options is much larger.

    The creators of The Enhanced Games/Olympics would disagree with you.

    Which brings me to my point: Are we satisfied being "Top Slave" or do we want to be Free? Or do you believe that Freedom is an illusion?

  • BoredPositron 1 day ago
    You can't prompt for taste yet and bad taste shines through every medium.
  • Quarrelsome 1 day ago
    reminds me of descartes. I was mentioning him the other day to my schizo mother in law, who hears voices, to offer the comfort of "I think, therefore I am". The idea being that during her worst episodes she might latch onto the thought that she is thinking, so while the voices _are_ scary; she still exists, she's still in control, she is thinking, therefore she is.

    Anyway, I casually mentioned he did a lot of his thinking in an oven and her curiosity was really piqued by that idea. Which is funny because every time I mention it to someone, that's the bit that is most interesting to them. I'm not convinced that an AI would necessarily pick up on that detail being of note as much as a human would.

    • egypturnash 22 hours ago
      Sadly some searching suggests that this fascinatingly eccentric image of Descartes sticking his head in an oven for a really good think is a mistranslation of "Descartes did a lot of his thinking in a room warmed by an oven".

      Best of luck to your mother-in-law in finding a way to deal with her voices, though. <3

    • bschmidt5545541 5 hours ago
      [dead]
  • jeffrallen 16 hours ago
    I work in IT ops for a cloud provider. When the going gets tough, I remind my colleagues that the harder it is to operate these servers with no customer visible downtime, the happier they are to pay us for it.

    The suck is why we're here.

    Your suck is my profit margin.

  • arkaic 1 day ago
    What happens when the AI perfects the art of writing? From one Turing test goalpost to the next, fooling the human utterly each time, until it does so forever. Is there a ceiling to this?
  • rballpug 20 hours ago
    A three-fold normative proposition.
  • cryptica 20 hours ago
    > The more I think about it, the happier I am that AI is transforming the world of writing. In a way, I think it’ll make it even easier to stand out

    I think this may be a form of denial. The reality is likely the opposite: AI will commoditize the act of writing entirely, shifting the value solely to insight.

    For too long, we’ve confused "good writing" with "good thinking." We assumed that if someone wrote beautifully, they had something smart to say. Conversely, we ignored brilliant people simply because they couldn't articulate their complex ideas effectively.

    AI fixes this market inefficiency. It allows experts who are too busy actually doing things to finally compete with professional writers. They provide the raw brilliance (the substance), and the AI provides the polish (the form).

    • wtetzner 17 hours ago
      > Conversely, we ignored brilliant people simply because they couldn't articulate their complex ideas effectively.

      I don't see how AI helps here. If you can't articulate your idea, then

      1) how clear is that idea in your head anyway

      2) how are you going to articulate it to the LLM?

      • cryptica 1 hour ago
        You're projecting your own thinking style onto others; you're incorrectly assuming that because your thinking maps neatly onto language, everyone else works like that. Not so.

        For example, I'm bilingual and I tend to think in visual and abstract concepts and then translate to the target language as a separate step. It doesn't necessarily come out exactly right the first time. I often re-read what I wrote and see ambiguities which could cause someone to misinterpret what I'm trying to say.

        Also, I tend to over-elaborate and struggle to understand other people's mental models. You need to understand your audience really well in order to convey points effectively or else you might bore them or your ideas might seem to go off on a tangent while you're actually trying to lay the foundation for the idea you're trying to convey...

        For example, as an experiment, I posted my previous comment 2 times; once handwritten, the other transformed by Gemini (the one you responded to). The transformed one did better and got more engagement... It said the same thing but punchier and shorter. It doesn't waste words laying the groundwork because it has a better sense of what you already know (as the audience) given the conversation context.

        This comment here is handwritten. I suspect it's probably not as punchy or to-the-point from your perspective.

        So to summarize; I think LLMs can help some people more than others and it fits with the point I was trying to make that it will empower more people to write who would previously not write.

    • b3kart 13 hours ago
      > Conversely, we ignored brilliant people simply because they couldn't articulate their complex ideas effectively.

      If you can't articulate your complex idea to a human, what's the reason to believe an LLM would understand it better?

    • ares623 19 hours ago
      I think you’re missing the other half to that conclusion.
    • phanimahesh 18 hours ago
      Now people can articulate bullshit better with LLMs.
      • cryptica 1 hour ago
        Yet my comment which you responded to, which was Gemini-generated, got more responses and engagement than my handwritten one!

        Yes, I actually did this as an experiment.

        From my perspective they are both different ways to communicate the same idea (with different effectiveness, different level of detail, to different audiences). I don't regard my Gemini-generated one as being any less 'my own work' as the one I painstakingly wrote by hand.

        It gets to the core of what writing should be about. It should be about substance, not form. LLMs are equalizers when it comes to form. Time to focus on substance now!

  • tills13 20 hours ago
    If someone, without my permission, used my content to create a replacement of me, for me -- however shitty -- I would probably commit a crime against that person. How are people so brazen? Or rude? Or stupid? Or psychopathic?
  • cryptica 20 hours ago
    > The more I think about it, the happier I am that AI is transforming the world of writing. In a way, I think it’ll make it even easier to stand out

    I totally disagree with this point. It's a combination of wishful thinking and denial. LLMs do a very fine job at writing if you give them the right base of information/insights. I think it will totally obliterate 'writing' as a differentiable skill.

    What will happen IMO is that people who have interesting ideas and experiences but suck at writing will have the upper hand. The market for content will be flooded by articles from people who would normally not write. They will feed the LLMs bullet points of interesting facts and observations and let the LLM fill in the gaps and actually make the article engaging. What matters is that the core points have to be interesting. The AI cannot come up with brilliant insights but it can convey brilliant insights really well.

    I think even if, hypothetically, some people could tell apart AI-generated content from manually written content, some AI-generated content may actually be more interesting and valuable to read than the manually written one...

    At the end of the day, writing by itself doesn't matter; it's just a communication medium. What matters are insights, ideas, concepts, perspectives... It was always about substance, not form. It's a flaw of the human mind that some people used form as a proxy for substance.

    There are a lot of people who know a lot and have a lot to say but they were so busy experiencing and learning that they never had time to write before... And even if they did, they could not convey their ideas effectively before.

    Now that LLMs have mastered the superficial aspects of communication, those aspects are no longer valuable and substance is more valuable. But IMO nobody will care whether articles or books were written by AI in the future. It won't have much effect on the quality or value of the book/article.

    I think what will matter in the future are:

    - Insights, ideas, perspectives.

    - Media (the most important still); whoever intermediates content distribution gets to decide what people consume and can shape their perception of quality to a significant extent.

    I'm hoping that as more people get involved in writing using LLMs, it will force more people to confront the second point... People will be forced to pay more attention to substance, as it will be the only real differentiator. I'm hoping people will begin to feel disgusted by the low level of substance that current media platforms purvey... It's already kind of happening; people invented the term "AI slop", but really it's not just AI that produces slop. The media has been guilty of spreading slop for quite some time, and it kept getting worse. Now AI is just a convenient strawman to bash.

  • bschmidt25004 4 hours ago
    [dead]
  • readthenotes1 1 day ago
    "If that was the point, I’d have switched to AI long ago already. "

    Surely, an AI generated text would have been pedantically correct and used the subjunctive mood there, "If that were..."

  • thundergolfer 1 day ago
    > The more I think about it, the happier I am that AI is transforming the world of writing. In a way, I think it’ll make it even easier to stand out—because the more people take shortcuts, the less quality will remain for readers to flock to, even if the overall quantity of options is much larger.

    It's easier than ever to be a p99.999% oil painter, but compared with p99.999% film directors, basically no one cares at all. Because painting is not in high demand, and film still is, for now.

    If the demand for your particular kind of writing vastly diminishes, that is to your detriment. AI's supply effect is changing the demand.

    George Eliot already wrote a p99.99999% novel, Middlemarch. It is only thanks to massive population growth that the number of readers of her novel has increased or remained steady. As a proportion of the population, Middlemarch has no readership, and is a side show of a side show. It has almost completely lost its once hallowed place in society and culture.