Everything posted by legend

  1. I feel like you should be well aware of the civilization-long record of the dangers of propaganda and how susceptible people are to it. If you think acknowledging that danger is patronizing, I don't care. Closer, but that's still not a complete description of the situation. But you're changing the topic again with more whataboutism without conceding anything about the original topic. I fear you're going to circle back to making the same original argument all over again if we pursue it, like you've already done in this thread. So I'm going to end the discussion here.
  2. No, that's not the reason, that's just the cherry on top for people who *will* block him. The reason it's a problem is because huge swaths of the population won't block him and they are not equipped to understand the delusional propaganda for what it is. The problem is not that I, nor anyone on this board, has to see his annoying tweets. We can (mostly) fix that or even just tolerate it to some degree when he reverses it. The problem is he's broadcasting his tweets to a huge population that's going to be taken in by his nonsense, and me blocking him won't change that. This is why I made the comment that you're missing the forest for the trees.
  3. And we're back to where we started about that not being a solution! Demut, you're all over the place on this. You keep popping between different stances and I don't see how we're going to address it here.
  4. So to be clear, you don't understand why a delusional egomaniac abusing twitter to force his shitty views on as many people as possible is a problem? Do you understand why Fox News is a problem? Do you understand why conspiracy theory rabbit hole recommendations on youtube are a problem? Do you understand why misinformation and propaganda are a problem in general?
  5. You seem to be changing the topic. This started by my telling you why telling people to block Musk is not a solution to the problems inherent in what Musk is doing with Twitter.
  6. What do you want me to say more about then? Whether Musk is a delusional egomaniac? That he's abusing twitter to shove his views down everyone's throat?
  7. If you're not already aware of how bad the public is with social media and how badly it's influenced by it, I don't think I'm going to successfully communicate to you on D1P all the ways it's a shit show with terrible outcomes. At least not within the time I'm willing to give it.
  8. Well, I obviously think we can succeed: the first thing I said is that I think we can make human-comparable AI! But the current state pales in comparison to humans because there are giant open questions we haven't solved yet. We can't just scale what we have; we have to develop wholly new approaches. There is much work to be done!
  9. No. Intelligence is not a single sliding scale. No amount of making ChatGPT bigger or using more compute will change the fact that it has no agency. The limitation is entirely inherent to the kind of system it is. AI has multiple different subfields of study because they focus on different cognitive aspects (and sometimes approaches). If you want to build an agent, you have to actually build the mechanisms associated with agency.
  10. No, telling individuals to use a block will not solve the larger social problem he’s presenting. Have you not paid attention to how the wider public uses social media, like, at all?
  11. Telling individuals they can use a block button against Musk (which he will also reverse) is not a simple fix to the problem of a delusional egomaniac going out of his way to abuse one of the world's largest media platforms to peddle his shit ideas. You're missing the forest for the trees.
  12. Yes, there is plenty of AI tech with some degree of agency. My research area is in fact dedicated to developing decision-making agents (reinforcement learning)! ChatGPT/Bing, however, does not have any agency and even the best of the AI tech that does have some agency pales in comparison to a human in terms of capabilities.
  13. By agency, I mean being able to make decisions to bring about goals/maximize objectives. Those objectives must be objectives that are directly in service to people, but they are objectives all the same. (A toy sketch of an agent maximizing an objective is included at the end of this list of posts.) I have absolutely zero interest in building AI with "personal" objectives like people. If I wanted to make a person I'd do it the old fashioned way.
  14. The thing for me that's annoying about all the attention it's getting is I very much believe AI can be built to have agency and intelligence that compares with humans. And getting excited (either positively or negatively) by this garbage that is so far from what we should aspire to cheapens the dream and how interesting the real problem is. It's like trying to have a baby and someone gives you a doll instead.
  15. The thing to remember is you can't push it to do anything because its words are connected to absolutely nothing. They mean nothing to it and it can't do anything but print words to you. It has no agency and no cohesive self.
  16. I think you're making a big assumption that they would sell substantially more steam decks if they priced their highest end model a little cheaper and that that increased number of sales would also lead to enough additional steam purchases to offset the cost. Steam Deck is amazing, but at this stage it's still going to appeal to a niche audience, so I'm skeptical that it being a little cheaper would have broadened that audience much. Valve probably has much better data to make this decision than we do.
  17. Haha sorry! I tried to keep it fairly understandable, but you probably will need to read the article I linked first or the context will be lost! That said, if you are curious about what I meant by anything there, I'm happy to answer any questions. (And if you're not, that's okay too.)
  18. FWIW, I would say this New Yorker article on ChatGPT and LLMs floating around is more right than wrong: ChatGPT Is a Blurry JPEG of the Web | The New Yorker (WWW.NEWYORKER.COM): "OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?" The lossy compression analogy is a good one, and it's one that's been regularly used by researchers when discussing neural nets in general. In fact, some of the theory behind that actively places the emphasis on why compression is useful (Google around for "information bottleneck neural nets" if you'd like some examples). There are a few threads where I think it's a bit misleading, though: how lossy compression relates to generalization, new information incorporation, and how these systems can be used in the future.

On compression and generalization, it's worth noting that the article does get some things right. For example, I'm pleased to see that they point out that one way to compress facts about arithmetic is to encode the rules of arithmetic and then follow them to answer *any* question, not just ones that were seen before. This is an important concept that guides some work in AI. However, these models do *not* compress that way, and there is good reason to be dubious that this kind of model architecture and training methodology has much hope of happily falling into that kind of compression. Because of that, I think people may disregard the kind of compression these systems do do as not very useful for intelligence.

However, the kind of compression neural nets are likely to do is still quite useful in many domains! One critical aspect of biological intelligence is the ability to remember different facts and use that memory to inform how to behave/reason in future scenarios. The complication is that no two moments in time are ever the same. Everything is always changing, and the number of things that change is *far* larger than you realize on casual inspection. Naïve ways of measuring the dissimilarity between two events also lead to bad results in AI. What biological intelligences are particularly good at is having "fuzzy" memories, where events are compressed into a useful representation in which events that behave similarly are "close" together. With this capability, biological intelligences can learn new things *very* quickly simply by remembering similar events and reusing that stored memory in similar situations in the future. (A toy sketch of this retrieval idea is included after this list of posts.)

What the deep learning revolution that started around 2011 was really about was advancing neural nets and their training methodology enough that they could find useful compressed representations and store "fuzzy memories" of the network's training data, such that accurate predictions about new situations can be made from that memory of the training data. The claim that many tasks can be solved by these kinds of fuzzy memories is the manifold hypothesis. However, while fuzzy memories are a crucial aspect of biological intelligence, they're not the only aspect, and not every cognitive task falls into this category. That is, the manifold hypothesis doesn't hold for every cognitive task. Consequently, this kind of compression is super important, but not a panacea. And while neural nets are good in general at compressing datasets into queryable memories that can be leveraged to answer questions about new situations, creating those actionable memories is an incredibly slow process that requires enormous quantities of training data.

What these nets lacked was the "fast memorization and reuse" that biological intelligences possess. Text generation as a problem space, however, has the interesting property that it has to operate on text sequences of undefined length. To build a neural net to solve this problem, you need a network architecture that can handle this undefined, growing length of input. Transformers/self-attention models are the current solution to that architecture problem. (A minimal sketch of self-attention over variable-length input also follows this list.) In building a system that can handle this problem space, though, the field also produced a way to solve the problem of fast new-information incorporation. When you prompt an LLM with text, that text is encoded into useful representations and is accessible to the model in future text generations. Consequently, in the prompt itself you can encode new information on which the network can operate. And experimentation with these models has shown that they can in fact immediately leverage this information! In your prompt you can define new words (or redefine existing ones) and the model will correctly use them in generated text. You can even encode various kinds of facts and the model will use those. As long as the text generation task conforms to the manifold hypothesis, you have a good shot of it correctly using that information. (A small prompt sketch of this is at the end of the list as well.)

Making progress on fast information acquisition is a really important result, and it's why the model architecture (transformers/self-attention) is much more important than its use in chat bot settings. A recent finding I really like, for example, is using the transformer architecture in an agent system where an agent playing a new "game" with new rules quickly explores and learns the rules and then acts effectively from them. See the video of it here for an example.

Finally, the last thing I think this article misses is the utility of these language models. They kind of end with saying a compressed version of the web isn't very useful. And I agree, as is evidenced by the disinterest in chat bots I've expressed here! But the future, IMO, isn't using this tech as a chat bot. It's in connecting it to other systems and percepts. Image generation from text is a great example. Or having an agent explain what it's "thinking." Or, if we want to stick to text, coupling it with search, bringing the information into its knowledge space with prompting, and then having it give summaries or answer questions about that information. There's tons of potential in grounding and connecting these language models to other things that would make them way more powerful and interesting than just next-token-prediction chat bots, and I *am* excited by those. /rant
  19. Depending on the game, I may turn subtitles off and listen through it. But if it has a lot of kind of boring dialog, I'll leave them on and just read it and advance.
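Post 13 above describes agency as making decisions to bring about goals or maximize objectives. As a purely illustrative sketch of that idea (not any system mentioned in these posts), here is a tiny tabular Q-learning agent that learns to pick actions maximizing a simple reward signal. The chain environment, rewards, and hyperparameters are all invented for the example.

```python
# Toy sketch of "agency" as decision-making that maximizes an objective.
# Everything here (the 5-state chain environment, rewards, hyperparameters)
# is invented for illustration; it is not any system referenced in the posts.
import random

N_STATES = 5          # states 0..4; reaching state 4 yields reward
ACTIONS = [-1, +1]    # step left or right along the chain

def step(state, action):
    """Toy environment: move along a chain; reward 1.0 only at the right end."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

# Q-table: the agent's estimate of return for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy decision making: mostly pick the action the agent
        # currently believes best serves its objective, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy walks right toward the rewarding state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```

The only point of the sketch is that an objective drives the decisions, and the machinery for acting toward that objective has to be built explicitly, which is the contrast being drawn with a pure text predictor.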
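Post 18 describes neural nets as compressing experience into "fuzzy memories": representations in which similar events land close together, so stored experience can be reused in new, never-identical situations. Below is a deliberately tiny sketch of that retrieval idea with hand-made 2-D vectors standing in for learned representations; real systems learn high-dimensional embeddings with a neural net rather than hard-coding them.

```python
# Toy sketch of a "fuzzy memory": events are stored as compressed vector
# representations, and a new situation is handled by finding the most
# similar stored memory. The 2-D vectors and labels are invented for
# illustration; real systems learn these representations from data.
import numpy as np

memory_vectors = np.array([
    [0.9, 0.1],   # remembered event A
    [0.8, 0.2],   # remembered event B (similar to A)
    [0.1, 0.9],   # remembered event C (quite different)
])
memory_outcomes = ["approach", "approach", "avoid"]

def recall(query):
    """Return the outcome attached to the most similar stored memory."""
    # Cosine similarity between the query and every stored representation.
    sims = memory_vectors @ query / (
        np.linalg.norm(memory_vectors, axis=1) * np.linalg.norm(query)
    )
    return memory_outcomes[int(np.argmax(sims))]

# A new situation is never identical to a stored one, but it can still be
# "close" in the representation, so the old memory gets reused.
print(recall(np.array([0.85, 0.15])))  # -> "approach"
print(recall(np.array([0.2, 0.8])))    # -> "avoid"
```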
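Post 18 also points to transformers/self-attention as the architecture that copes with text sequences of undefined length. The minimal, single-head scaled dot-product self-attention sketch below shows the property that matters for that point: the learned matrices have a fixed size regardless of how many tokens arrive, so the same parameters handle any sequence length. The dimensions and random weights are placeholders; real models add multiple heads, masking, positional information, and many stacked layers.

```python
# Minimal sketch of single-head scaled dot-product self-attention.
# Key property: W_q, W_k, W_v have fixed shapes no matter how many tokens
# are in the input, so sequences of any length can be processed.
# Dimensions and random values are placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)
d_model = 8   # size of each token's vector representation

# Parameters are independent of sequence length.
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

def self_attention(x):
    """x has shape (seq_len, d_model); seq_len can be anything."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(d_model)             # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ v                              # (seq_len, d_model)

# The same function (and the same parameters) handles 3 tokens or 7 tokens.
print(self_attention(rng.normal(size=(3, d_model))).shape)  # (3, 8)
print(self_attention(rng.normal(size=(7, d_model))).shape)  # (7, 8)
```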
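Finally, post 18 notes that the prompt itself is a channel for fast incorporation of new information: define a new word or state a new fact in the prompt and the model can use it immediately in its generated text, with no weights changing. The snippet below only shows how such a prompt might be assembled; `call_llm` is a hypothetical stand-in for whatever model API would actually be called, not a function from any particular library.

```python
# Sketch of incorporating brand-new information via the prompt alone.
# No model weights change; the new definition lives only in the prompt text.

def build_prompt(new_fact: str, question: str) -> str:
    """Pack a new definition/fact plus a question into one prompt string."""
    return (
        "Here is a new word and its definition.\n"
        f"{new_fact}\n\n"
        "Using that definition, answer the question below.\n"
        f"Question: {question}\n"
        "Answer:"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: in practice this would call whatever LLM
    # API you use. It is a stand-in so the example stays self-contained.
    return "<model output would appear here>"

prompt = build_prompt(
    new_fact="A 'glorp' is a cup that can only hold cold drinks.",
    question="Would you pour hot coffee into a glorp?",
)
print(prompt)
print(call_llm(prompt))
```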