legend

Members · Posts: 30,129 · Days Won: 1

Everything posted by legend

  1. I can skip about one day and then it's unbearable, so I'm definitely not participating.
  2. I don't really like set items anyway. Diablo is cool because you can make custom builds and the very concept of sets is antithetical to that.
  3. What if instead of cutting people who help build a new era for us, we empowered them to do even more? Naaa, better trim the fat.
  4. FWIW, there is actually some interesting new function approximation theory, prompted by DL's success at generalization, that's upending the simpler bias-variance trade-off theory (like VC dimension and Rademacher complexity). The high-level view is that low levels of overparameterization incur overfitting, consistent with issues like those explored in VC-dimension analysis. But high overparameterization actually becomes much better at generalization, because it leads to more robust interpolating models in a latent space. High overparameterization also makes local methods like SGD less likely to get stuck in poor local optima that overfit. (See the sketch below.) But these interesting new findings about function approximation theory don't really change what kind of model ChatGPT is or the inherent limitations of that kind of model.
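     A minimal sketch of the kind of experiment behind this "double descent" picture, as a toy illustration rather than a reproduction of any particular paper: min-norm least squares on random Fourier features of growing width. All function names and parameter values here are made-up assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def make_data(n, noise=0.1):
            x = rng.uniform(-1, 1, size=(n, 1))
            return x, np.sin(4 * x[:, 0]) + noise * rng.standard_normal(n)

        def features(x, W, b):
            return np.cos(x @ W + b)  # random Fourier feature map

        x_tr, y_tr = make_data(40)   # 40 training points
        x_te, y_te = make_data(500)

        for n_feat in [5, 10, 20, 40, 80, 160, 640]:
            W = rng.standard_normal((1, n_feat)) * 4.0
            b = rng.uniform(0, 2 * np.pi, n_feat)
            # pinv gives the minimum-norm solution, which interpolates the
            # training data once n_feat exceeds the number of training points
            w = np.linalg.pinv(features(x_tr, W, b)) @ y_tr
            mse = np.mean((features(x_te, W, b) @ w - y_te) ** 2)
            print(f"features={n_feat:4d}  test MSE={mse:.3f}")

     In runs of this kind of toy, test error typically spikes near the interpolation threshold (feature count close to the 40 training points) and then falls again as the model becomes heavily overparameterized.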
  5. Hmmm. Probably not? I can check my old computer later and see if it's still sitting somewhere. Not sure if it would still work either though. I'll let you know!
  6. The model *is* a word (well, token, which is a few characters long, if memory serves) predictor. It takes as input the last set of tokens entered and generates a probability distribution over the next token that would follow. ChatGPT has an additional fine-tuning step in which the probabilities of token outputs are adjusted by human preferences for its different responses. But the very nature of the model is token prediction: it starts with the prompt text, generates a probability distribution over the next token, samples a token, and then continues one token at a time until it reaches a stop token. (A toy sketch of that sampling loop is at the end of this post.)

     There is no reasoning involved with its output either. Once a token is sampled, it has to go with the flow of it. This is very much the same process as your keyboard predictor when you just keep tapping the suggested next word. It is, however, an extremely large model trained on an absurd amount of data, and that has made it good at adapting to context quite coherently. And when it comes to talking to it about topics already well covered on the internet, it can do a pretty good job! So it can be a useful tool, but from an AI perspective it misses the mark in numerous ways. It's just a massive scale-up of ancient, simple ideas that fails to meet the intelligent-agent goal.

     However, it's also an important step toward better AI systems, because it gives us good representations for language, which have otherwise been elusive to generate. You can then ground those representations with other sources of information or senses, which opens the door for much better AI systems that we can interface with. The model architectures developed for language modeling (the transformer architecture in particular) are also really useful outside of language, which is another win. But this kind of chat bot AI version of it is still just token prediction.

     If you want to know more about some of the limitations of token predictors and the concerns they bring, I would recommend looking into work by Timnit Gebru or Emily Bender. A good start is their Stochastic Parrots paper (in particular section 6): "On the Dangers of Stochastic Parrots," Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (dl.acm.org).
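     To make that loop concrete, here's a toy sketch. The bigram-style MODEL table and the token names are made up for illustration; a real LLM conditions on the whole context and has a vocabulary of tens of thousands of tokens, but the generate loop has the same shape.

        import random

        STOP = "<eos>"
        # Made-up next-token distributions, keyed by the previous token only.
        MODEL = {
            "<bos>": {"the": 0.6, "a": 0.4},
            "the": {"cat": 0.5, "dog": 0.5},
            "a": {"cat": 0.5, "dog": 0.5},
            "cat": {"sat": 0.7, STOP: 0.3},
            "dog": {"sat": 0.7, STOP: 0.3},
            "sat": {STOP: 1.0},
        }

        def sample_next(tokens):
            # A real LM conditions on all of `tokens`; this toy uses the last one.
            dist = MODEL[tokens[-1]]
            choices, probs = zip(*dist.items())
            return random.choices(choices, weights=probs)[0]

        def generate(prompt=("<bos>",)):
            tokens = list(prompt)
            while tokens[-1] != STOP:
                tokens.append(sample_next(tokens))  # sampled tokens are committed
            return " ".join(tokens[1:-1])

        print(generate())  # e.g. "the cat sat"

     The point of the toy is the loop structure: sample, commit, repeat. Nothing in it revisits or reasons about earlier choices.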
  7. Actually, I can understand how people can get value out of hallucinated experiences. For AI, word hallucinations that are not tethered to anything are boring and just a magic trick, even if technically impressive. It misses the mark of what intelligent agents need to be. Grounding language models in other senses and concepts *is* more interesting, though. Language-conditioned image generators, for example, at least take a step in this direction (although only a small one), and there are many more interesting things that can be done with grounded language models than word-predicting chat bots.
  8. I stand by my claim, now more than 10 years old, that chat bot AI is the most boring form of AI! Large language models haven't changed that opinion: "Here's a bunch of hallucinated nonsense."
  9. Enjoy! I love my Deck so much. If you enjoy the Switch, the Deck feels like the next-gen version of it. And if you are an avid Steam player (which I believe you are), having that whole library brings it to another level.
  10. I'm aware. Yet I still seem to only get C's on the "just timing" and things just feel off. If I could tell you what's going wrong, I wouldn't have made the post that I did.
  11. I could, but it feels like I'd be missing the thing that makes it stand out and would be left with something more generic. I want the rhythm combat to work, but for some reason I can't seem to sync with it. Maybe I'm just bad at rhythm when I'm also trying to fight and too many prior game biases cloud my timing of attacks. But if that's true, I wish there were better feedback about what's going wrong. They have a lot of tools to "show" you the timing of the beats, but they don't really do anything to tell you why they think you're off. So even if it is my fault for not doing it right (rather than input latency or some other issue in the game), it comes off feeling arbitrary and ultimately unsatisfying.
  12. I can't seem to find the rhythm in this game. I can play Trombone Champ and get S scores on tracks, but on this it just feels like it misses more than it should. I am playing with a wireless controller, so I dunno if it's input lag or something, but it just feels wrong. Not that I'm the most musically inclined person by any stretch, but for some reason this doesn't feel like it syncs right, and that kind of ruins the appeal of fighting to the beat.
  13. Yep, still runs like poopoo with ray tracing. They really need to multithread the code. But if they didn't already do the substantial work for that in the lead-up to this patch, I don't see them doing it now.
  14. Pizza is just cooked dough with stuff on top. If you do it right you can put lots of different things on it. You pizza bigots.
  15. Hmm, think it was there from the start. Pretty sure I got it when it came out and it was there. Well, at least since its full release. (The game was in early access for a while, so it might not have been in some of the pre-release versions.)
  16. Have you actually looked at the accessibility options? You can tune it way down. Disabling the contact damage alone makes it drastically easier.
  17. Rogue has a shocking number of accessibility options to make it easier without even taking away your achievements. So yeah, I recommend it!
  18. You might enjoy Rogue Legacy 2. When you reach a boss, you can lock the world, so if you lose you can teleport right back to it on your next life. Makes it far easier to learn the patterns.
  19. I mean, it's the right thing to do. But man, it is going to be difficult to keep the show going.