• OpenAI GPT-2, modified by Mr_KrzYch00 to support an innovative website front-end.
  • The AI tries to predict the continuation of the text typed into the INPUT box
  • Generated samples are locked to the browser session and are visible only within that session
  • The 1558M model generates one sample at a time using 10-token micro-samples that accumulate back into the input, producing a sliding-window-like effect within the model's 1024-token maximum context
  • Micro-batches undergo post-processing, allowing more control over how each micro-batch is re-fed into the input for outputs longer than 20 tokens (see the first sketch after this list)
  • All available AI parameters are adjustable: Temperature, TOP_K and TOP_P, with additional multi-range support for TOP_P (see the TOP_P sketch after this list)
  • The <|endoftext|> token is properly supported in the INPUT box and is correctly passed to and understood by the AI
  • The number of tokens detected in the INPUT box is displayed exactly as the AI sees them (the tokenizer is integrated into the front-end); see the token-counting sketch after this list
  • Simple tooltips are provided to clarify each of the functions this website offers
  • Virtual windows on this site can be repositioned and resized freely within the browser window
  • Powered by an Nvidia RTX 3060 12GB (the 1558M model barely fits; sampling runs at roughly 10 tokens per 4.5 s, since auto-regressive decoding cannot keep all CUDA cores busy)
  • Old projects and stuff here
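
  A minimal sketch of the micro-sample loop described above, assuming a generate_tokens(context, n, temperature, top_k, top_p) helper that wraps the stock GPT-2 sampling code; the helper name and the post_filter hook are hypothetical illustrations, not the site's actual code.

    MAX_CONTEXT = 1024   # GPT-2 context window
    MICRO_STEP = 10      # tokens produced per micro-sample

    def generate_sample(input_tokens, total_tokens, temperature, top_k, top_p,
                        generate_tokens, post_filter=None):
        """Accumulate 10-token micro-samples back into the input, sliding the
        window so the context never exceeds the 1024-token maximum."""
        context = list(input_tokens)
        output = []
        while len(output) < total_tokens:
            new_tokens = generate_tokens(context, MICRO_STEP,
                                         temperature, top_k, top_p)
            # Optional post-processing before the micro-sample is re-fed
            # as input (e.g. trimming at <|endoftext|>).
            if post_filter is not None:
                new_tokens = post_filter(new_tokens)
            output.extend(new_tokens)
            context.extend(new_tokens)
            # Drop the oldest tokens so the next micro-sample still fits,
            # giving the sliding-window-like behaviour.
            if len(context) > MAX_CONTEXT - MICRO_STEP:
                context = context[-(MAX_CONTEXT - MICRO_STEP):]
        return output[:total_tokens]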
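  One plausible reading of the multi-range TOP_P option, shown as a sketch rather than the site's confirmed behaviour: the user supplies a list of TOP_P values and a different value is applied to each micro-sample. The nucleus filter itself is standard.

    import numpy as np

    def top_p_filter(probs, top_p):
        """Keep the smallest set of tokens whose cumulative probability
        reaches top_p, zero out the rest, and renormalise."""
        order = np.argsort(probs)[::-1]            # tokens by descending probability
        cumulative = np.cumsum(probs[order])
        cutoff = int(np.searchsorted(cumulative, top_p)) + 1
        keep = order[:cutoff]
        filtered = np.zeros_like(probs)
        filtered[keep] = probs[keep]
        return filtered / filtered.sum()

    def top_p_for_step(top_p_list, step):
        """Cycle through the user-supplied TOP_P list, one value per
        micro-sample (hypothetical policy)."""
        return top_p_list[step % len(top_p_list)]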
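  A sketch of the token-counting rule, here using the BPE encoder shipped with the openai/gpt-2 repository (the website performs the equivalent in the front-end itself). The two-argument get_encoder signature matches recent versions of that repo; older versions take only the model name.

    from encoder import get_encoder   # ships with the openai/gpt-2 repository

    ENDOFTEXT = '<|endoftext|>'

    def count_tokens(text, model_name='1558M', models_dir='models'):
        """Count the input exactly as the model will see it, encoding the
        literal <|endoftext|> marker as its single special token id."""
        enc = get_encoder(model_name, models_dir)
        end_id = enc.encoder[ENDOFTEXT]            # 50256 in the GPT-2 vocabulary
        tokens = []
        for i, chunk in enumerate(text.split(ENDOFTEXT)):
            if i > 0:
                tokens.append(end_id)
            if chunk:
                tokens.extend(enc.encode(chunk))
        return len(tokens)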


    [Page controls: Server status indicator · Advanced Parameters panel (TOP_P List (0 ~ 1), Presets, Micro-outputs post-filtering) · Input TEXT box · Output SAMPLES area]
    This is a research site, do not abuse it!
    The administrator is not responsible for any input a user writes, nor for any output the algorithm produces.
    This site uses cookies to identify your samples and make them visible only to you.