Can ChatGPT Handle Infinite Possibilities?

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB
Twitter: https://twitter.com/MLStreetTalk

In this video, Dr. Tim Scarfe and Dr. Walid Saba discuss the wonders of Turing's proof and the distinction between recurrent neural networks and recursion. Dr. Saba explains that recursion is a computing model with clear semantics: a base case and a well-defined stopping criterion. A recurrent neural network, by contrast, simply applies a repetitive process without those constraints. He also emphasizes the power of representing infinite objects with finite specifications, as seen in the practical infinity handled by compilers and interpreters such as Python's, which must be prepared to accept any member of an infinite set of valid programs.
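As a small illustration of that idea (my sketch, not code from the video): a few lines of recursion are a finite specification of the infinite language { aⁿbⁿ : n ≥ 0 }, with an explicit base case that guarantees termination.

```python
def accepts_anbn(s: str) -> bool:
    """Finite recursive specification of the infinite language { a^n b^n }."""
    if s == "":                                    # base case: stopping criterion
        return True
    if s.startswith("a") and s.endswith("b"):
        return accepts_anbn(s[1:-1])               # recurse on a strictly smaller string
    return False
```

The definition is a handful of characters long, yet it correctly classifies strings of any length; notably, aⁿbⁿ is a context-free language that no finite-state (purely reactive) process can recognize exactly.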

He then addresses the question of whether natural language models, such as transformers, can approximate recursion to a level of complexity that covers most of the language for practical purposes. While these models have approximated recursion well so far, Dr. Saba cautions that the debate is not settled and that more testing is needed before such models are relied on in real-world situations.
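One way to make "more testing is needed" concrete (a hypothetical harness, not from the video) is to probe a model's judgments at ever greater recursion depths and record where they diverge from the true language. Here `model_accepts` stands in for any string-classifier, e.g. a wrapper around a trained network.

```python
def true_accepts(s: str) -> bool:
    """Ground truth for the language { a^n b^n }, by repeatedly stripping matched pairs."""
    while s.startswith("a") and s.endswith("b"):
        s = s[1:-1]
    return s == ""

def probe(model_accepts, max_n: int = 30) -> list[int]:
    """Return the depths n at which a model disagrees with the true language."""
    bad = []
    for n in range(max_n + 1):
        pos = "a" * n + "b" * n                    # in the language
        negs = ["a" * (n + 1) + "b" * n]           # one symbol off: not in the language
        if n >= 2:
            negs.append("ab" * n)                  # right symbol counts, wrong structure
        ok = model_accepts(pos) and not any(model_accepts(x) for x in negs)
        if not ok:
            bad.append(n)
    return bad
```

A shallow heuristic such as `lambda s: len(s) % 2 == 0` passes at small depths but fails once the structurally corrupted negatives appear, which is exactly the kind of depth-dependent breakdown the discussion warns about.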

TOC
The magic of Turing's proof
Recurrent Neural Networks vs. Recursion
Recursion as a computing model
The power of representing infinite objects
Practical infinity in compilers
Approximating infinity in natural language


References:
1. Turing, A. (1936). On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42(1), 230-265.
https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf
2. Chomsky, N. (1956). Three models for the description of language. IRE Transactions on Information Theory, 2(3), 113-124.
https://ieeexplore.ieee.org/document/1056813
3. Delétang, G., et al. (2023). Neural Networks and the Chomsky Hierarchy. ICLR 2023.
https://arxiv.org/pdf/2207.02098.pdf