Dear Word Detective: What is the origin of the phrase "See you in the funny pages (papers)"? My girlfriend says her father used to tell her that whenever he left the house. I remember hearing it when I was younger, too. I'm guessing the speaker is saying you're a joke, but where did it come from? - Andrea.

Once upon a time there were things called "newspapers," which were printed on stuff called "paper." Imagine if you could somehow take just the image on the screen of your computer (iPad, whatever) and fold it up and carry it around and read it anytime you wanted, without needing any batteries or wi-fi. Paper was like that, and "newspapers" were printed every day to tell folks what was going on in the world. But since most people found all that news pretty depressing, the newspapers also had a section, usually near the back, where they printed cartoons and comic strips to cheer folks up so they would buy the paper again the next day. On Sundays, many newspapers even had a whole special section devoted to just comic strips, often printed in color. Both this section and the daily comics pages were known as "the funny pages," "the funny papers" or "the funny sheet." A few grumpy, snooty newspapers (e.g., The New York Times) never published funny pages, and to this day they have to pay people to read their newspaper on the internet.

"See you in the funny papers" is a jocular farewell that dates, as far as anyone has been able to determine, to the early years of the 20th century. A question about the phrase was raised back in 2002 on the mailing list of the American Dialect Society, and ADS member Douglas Wilson did a bit of research and deduction to come up with what seems like a reasonable explanation of the origin of the phrase, which I will do my best to summarize here.

As Wilson notes, "see you" is a common component in colloquial farewells (e.g., "See you around," "See you later," or simply "See you"), used even between people who have no expectation of seeing each other again (as, for example, between a customer and a store clerk). "See you" was a common casual farewell in the US at least by the late 1890s, although it may be somewhat older. Wilson also notes that such "See you" farewells have long been the occasion of humorous elaborations such as "See you in church" (between non-churchgoers) and, as a joking response to "See you later," "Not if I see you first."

With all due respect to the open-source contributions, the inference API of Hugging Face is robbery in broad daylight. Here is my story of using it and paying $750 for CPU inference (slow) when GPU inference (fast) would have cost me $300 through self-hosting or any other managed service like NLP Cloud or Inferrd.

1. I used CPU inference of a single model and got billed $750. CPU inference costs $10 per 1M characters, which comes out to about 8k sentences (roughly 125 characters per sentence). On GPU it is $50 to operate on the same ~8k sentences. This is extremely expensive for any production use case, and more so if you are using a simple text generation model like T5, which, unlike something like GPT-3, does not need insane infra to host. I previously self-hosted my model on GPU for $300 for my app: I got faster inference and had the full machine to myself, with no token-based charge. Whereas HF inference of the same model on CPU (slow) cost me $750, and that for a bootstrapped app without any crazy usage.

2. In another instance, I tried to use sentence transformers for getting embeddings. The documentation only showed an example of scoring similar sentences, with no reference to getting embeddings, which would be the prime use case for anyone building a semantic search system. I had to exchange multiple emails to get a different API endpoint, not mentioned anywhere, that returns embeddings (both calls are sketched after this list). Plus, the embeddings model cannot be pinned: you need to clone it and change the pipeline to be able to pin it and use it for embeddings.

3. I deployed a question answering model (deepset/Roberta) to a client in production using the inference API, and it failed intermittently and affected production systems. I had to take it down suddenly and self-host it (a minimal version of that setup is sketched at the end of this post).
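Since points 2 and 3 are hard to picture without the calls themselves, here is roughly what they looked like on my end. Treat this as a minimal sketch from memory, not a reference: the exact endpoint URLs and the model names (`deepset/roberta-base-squad2`, `sentence-transformers/all-MiniLM-L6-v2`) are my assumptions, so check them against the current docs.

```python
import requests

API_TOKEN = "hf_..."  # your Hugging Face access token
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

# Question answering (point 3): the documented per-model endpoint.
QA_URL = "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2"
qa_payload = {
    "inputs": {
        "question": "How much does CPU inference cost?",
        "context": "The CPU inference costs $10 for 1M characters.",
    }
}
resp = requests.post(QA_URL, headers=HEADERS, json=qa_payload)
print(resp.json())  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}

# Embeddings (point 2): the per-model endpoint for a sentence-transformers
# model runs the sentence-similarity pipeline, not feature extraction. The
# endpoint I was eventually emailed overrides the pipeline in the URL itself:
EMB_URL = (
    "https://api-inference.huggingface.co/pipeline/feature-extraction/"
    "sentence-transformers/all-MiniLM-L6-v2"
)
emb_payload = {"inputs": ["First sentence to embed.", "Second sentence."]}
vectors = requests.post(EMB_URL, headers=HEADERS, json=emb_payload).json()
print(len(vectors), len(vectors[0]))  # one vector per input sentence
```

Nothing on the model page hints that the second URL exists, which is exactly the documentation gap I mean.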
With all due respect, I wonder if anyone on the team has used the inference API and the documentation end-to-end and seen the exorbitant price for what they offer. I assume it is not rocket science for such a large team. I have many open-source models on HF that I trained myself and published.
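For comparison, the self-hosting I fell back to is not much code. Below is a rough sketch of the kind of service that replaced the API for me, assuming FastAPI and the standard transformers pipeline; the model name and port are illustrative, not what my client actually ran.

```python
# server.py -- a minimal self-hosted replacement for the hosted QA endpoint.
# Run with: uvicorn server:app --host 0.0.0.0 --port 8000
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Load once at startup. device=0 uses the first GPU; device=-1 falls back to CPU.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2", device=0)

class QARequest(BaseModel):
    question: str
    context: str

@app.post("/qa")
def answer(req: QARequest):
    # Same request/response shape as the hosted pipeline, but flat-rate
    # hardware: the box costs the same at 8k sentences or 8 million.
    return qa(question=req.question, context=req.context)
```

On the numbers above, a $300/month GPU box breaks even with the hosted GPU rate ($50 per ~8k sentences) at about 48k sentences a month, and everything past that point is where the API pricing really falls apart.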