• 0 Posts
  • 31 Comments
Joined 1 year ago
Cake day: June 22nd, 2023

  • The LLM isn’t the issue here. It’s generating coherent text well enough.

    The problem is that there is no mechanism for identifying odd or out-of-place items in the stimuli fed to the model. Such a mechanism (separate from the LLM) would sit between the CNN (the image recognizer) and the LLM (the text generator). What typically happens is that the CNN recognizes the subjects and objects in an image and passes that list along to the LLM, which generates a description. Since the LLM never has access to the original image, you can’t ask it to look for unusual things the CNN didn’t pass along (see the sketch after this comment).

    The result is not surprising. People just don’t know how these models work and so assume they can do anything.
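
    To make the hand-off concrete, here is a minimal sketch of the two-stage pipeline described above, with hypothetical function names and stubbed-out stages; the point is that the text model only ever receives the label list, never the pixels.

    ```python
    # Minimal sketch (hypothetical names, stubbed stages) of the CNN -> LLM
    # hand-off: the text model only ever sees the label list, never the image.

    from typing import List


    def recognize_objects(image_path: str) -> List[str]:
        """Stand-in for the CNN stage: return labels detected in the image."""
        # A real system would run an image classifier/detector here.
        return ["dog", "frisbee", "park"]


    def generate_description(labels: List[str]) -> str:
        """Stand-in for the LLM stage: write a caption from the labels alone."""
        # Anything the recognizer missed (e.g. an out-of-place object) cannot
        # show up here, because the pixels are never part of the prompt.
        return f"A photo showing: {', '.join(labels)}."


    if __name__ == "__main__":
        labels = recognize_objects("example.jpg")
        print(generate_description(labels))
    ```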



  • More like a notes/personal-wiki app than a README editor.

    That said, Obsidian is a diamond in the rough. Building a personal wiki while learning a skill, then referencing it later (via search or by category), is a true life hack. It feels like augmenting your memory capacity.

    Truly invaluable if you need to reference things often but your knowledge base is highly specialized (e.g., I’m a neurology professor).