[20:45:49] Here is how I understand the “relationship” between Abstract Wikipedia and large language models. Large language models work by assembling their response in an abstract syntax format, similar to a compiler, and then an algorithm converts the data into sentences and paragraphs cognizable to humans. What Abstract Wikipedia and Wikifunctions do is put that technology in the hands
[20:45:50] of humans, so instead of a machine process producing that kind of response, it’s human effort. If my understanding is correct, this could be an extremely valuable project in the near future.
[21:26:07] Partially. I'm not sure the description of LLMs is correct: that they first assemble an abstract syntax. The rest sounds right.
[21:26:23] Afaic, "large language models" don't have an explicit abstract syntax or conversion step; they "guess" responses based on a massive statistical model that was adjusted using large amounts of training data.
[21:26:25] Abstract WP does not use a statistical approach, but one that models language and knowledge in explicit, algorithm-and-data-like structures that can be adjusted and checked by experts. That is not so easy with a statistical model, which (at the complexity needed for generating written sentences) is pretty much a black box, "edited" only by "retraining" it.
[21:27:07] Yep
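
A minimal sketch of the "explicit algorithm-and-data" idea described above, in Python. This is purely illustrative and assumes nothing about the real Wikifunctions API: the data class and renderer names here are made up. The point is that each rendering rule is a small, readable function an editor can inspect and fix, unlike a statistical model that can only be changed by retraining.

```python
from dataclasses import dataclass


@dataclass
class AgeClaim:
    """Language-neutral 'abstract content': a subject and its age."""
    subject: str
    age: int


def render_age_claim_en(claim: AgeClaim) -> str:
    """English renderer: an explicit rule a human editor can read, test, and adjust."""
    return f"{claim.subject} is {claim.age} years old."


def render_age_claim_de(claim: AgeClaim) -> str:
    """German renderer for the same abstract content."""
    return f"{claim.subject} ist {claim.age} Jahre alt."


if __name__ == "__main__":
    claim = AgeClaim(subject="Marie Curie", age=66)
    print(render_age_claim_en(claim))  # Marie Curie is 66 years old.
    print(render_age_claim_de(claim))  # Marie Curie ist 66 Jahre alt.
```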