• 1 Post
  • 18 Comments
Joined 1 year ago
Cake day: June 16th, 2023




  • SLS is on track to be more expensive per moon mission, adjusted for inflation, than the Apollo program. It is wildly too expensive and should be cancelled.

    This is compounded by the fact that the rocket is incapable of sending a manned capsule to low lunar orbit, which is why the Lunar Gateway is planned for a near-rectilinear halo orbit instead.

    Those working in the space industry know that SpaceX’s success is due not to Elon but to Gwynne Shotwell. She is the President and COO of SpaceX, responsible for its day-to-day operations. The best outcome after the election would be to remove Elon from the board and revoke his ownership of what is effectively a defense contractor, as a consequence of his political interference in this election. Employees at SpaceX would be happy, the government would be happy, and the American people would be happy.


  • The technical definition of AI in academic settings is any system that can perform a task with reasonably good performance and do so autonomously.

    The field of AI is absolutely massive and includes super basic algorithms like Dijkstra’s algorithm for finding the shortest path in a graph or network. (Dijkstra’s is exact and runs in polynomial time; it’s harder relatives, like the travelling salesman problem, that are NP-complete with no known polynomial-time solution.) At the scale of a road network, even exact search is too slow, so navigation systems use programmed heuristics to approximate optimal solutions, and it’s entirely possible that the path generated is in fact not optimal, which is why your GPS doesn’t always give you the guaranteed shortest path.
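
    For the curious, here is a minimal sketch of Dijkstra’s algorithm in Python (illustrative only; real navigation stacks use far more elaborate data structures):

    ```python
    import heapq

    def dijkstra(graph, start):
        # graph: dict mapping node -> list of (neighbor, edge_weight) pairs
        dist = {start: 0}
        heap = [(0, start)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry; a shorter route was already found
            for neighbor, weight in graph[node]:
                nd = d + weight
                if nd < dist.get(neighbor, float("inf")):
                    dist[neighbor] = nd
                    heapq.heappush(heap, (nd, neighbor))
        return dist

    # Toy road network: edge weights are travel times
    roads = {
        "A": [("B", 5), ("C", 2)],
        "B": [("D", 1)],
        "C": [("B", 1), ("D", 7)],
        "D": [],
    }
    print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 2, 'D': 4}
    ```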

    To help distinguish fields of research, we use extra qualifiers to narrow the focus, such as “classical AI” and “symbolic AI”. Even “Machine Learning” is too ambiguous, as it originally described statistical processes for finding trends in data, or “statistical AI”. Ever used Excel to find a line of best fit for a graph? That’s “machine learning”.
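
    That line of best fit is just ordinary least squares, and it takes only a few lines outside Excel too; a minimal sketch in Python:

    ```python
    import numpy as np

    # Fit y = m*x + b to noisy data by ordinary least squares,
    # the same thing Excel's trendline does.
    x = np.array([0, 1, 2, 3, 4], dtype=float)
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
    m, b = np.polyfit(x, y, deg=1)
    print(f"slope={m:.2f}, intercept={b:.2f}")
    ```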

    Admittedly, “statistical AI” does accurately encompass all the AI systems people commonly think about, like “neural AI” and “generative AI”. But without getting into more specific qualifiers, “deep learning” and “transformers” are probably the best terms for narrowing down what most people think of when they hear “AI” today.




  • I am an LLM researcher at MIT, and hopefully this will help.

    As others have answered, LLMs have only learned to autocomplete a given input, known as the prompt. Functionally, the model strictly predicts the probability of the next word+ (more precisely, a token), with some randomness injected so the output isn’t exactly the same for any given prompt.
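
    That “randomness injected” is usually temperature sampling over the model’s scores for every candidate token; a tiny sketch with made-up numbers (nothing here is a real model’s output):

    ```python
    import numpy as np

    # Toy scores a model might assign to four candidate next tokens.
    logits = np.array([2.0, 1.0, 0.5, -1.0])

    def sample_next(logits, temperature=0.8):
        scaled = logits / temperature     # <1 sharpens, >1 flattens the choice
        probs = np.exp(scaled - scaled.max())
        probs = probs / probs.sum()       # softmax: a probability per token
        return np.random.choice(len(logits), p=probs)

    print(sample_next(logits))  # index of the sampled token; varies run to run
    ```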

    The probability of the next word comes from the model’s training data, combined with a very complex mathematical method that computes the impact of every previous word on every other previous word and on the newly predicted word, called self-attention. You can think of this as a computed relatedness factor.
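
    For the mathematically inclined, here is a minimal single-head version of that computation in Python/numpy (real models add multiple attention heads, masking, and learned position information):

    ```python
    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        Q, K, V = X @ Wq, X @ Wk, X @ Wv         # project each word's vector
        scores = Q @ K.T / np.sqrt(K.shape[-1])  # relatedness of every word pair
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax rows
        return weights @ V                       # blend values by relatedness

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 8))                  # 5 words, 8-dim embeddings
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    # Note the 5x5 scores matrix: its size grows quadratically with the
    # number of words, which is exactly the context window limit below.
    print(self_attention(X, Wq, Wk, Wv).shape)   # (5, 8): one vector per word
    ```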

    This relatedness factor is very computationally expensive, and its cost grows quadratically with the number of words considered, so models are limited in how many previous words can be used to compute relatedness. This limitation is called the context window. The recent breakthroughs in LLMs come from the use of very large context windows to learn the relationships of as many words as possible.

    This process of predicting the next word is repeated iteratively until a special stop token is generated, which tells the model to stop generating more words. So, literally, the model builds entire responses one word at a time, from left to right.
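
    A toy illustration of that loop, with a simple bigram table standing in for the real neural network (every name here is illustrative):

    ```python
    import random

    # Toy "model": bigram counts from a tiny corpus stand in for a network.
    corpus = "the cat sat on the mat . the cat ran . <stop>".split()
    bigrams = {}
    for a, b in zip(corpus, corpus[1:]):
        bigrams.setdefault(a, []).append(b)

    def generate(prompt_word, max_tokens=10):
        out = [prompt_word]
        for _ in range(max_tokens):
            candidates = bigrams.get(out[-1])
            if not candidates:
                break
            nxt = random.choice(candidates)  # randomness injected here
            if nxt == "<stop>":              # special stop token ends generation
                break
            out.append(nxt)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the mat"
    ```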

    Because all future words are predicated on the words that came before, whether in the prompt or in the model’s own generated output, it becomes impossible for the model to apply even the most basic logical concepts unless all the required components are present in the prompt or have somehow serendipitously been stated by the model in its generated response.

    This is also why LLMs tend to work better when you ask them to work out all the steps of a problem instead of jumping to a conclusion, and why the best models tend to rely on extremely verbose answers to give you the simple piece of information you were looking for.

    From this fundamental understanding, hopefully you can now reason about the LLM’s limitations in factual understanding as well. For instance, if a given fact was never mentioned in the training data, or an answer simply doesn’t exist, the model will make it up, inferring the next most likely word to create a plausible-sounding statement. Essentially, the model has gotten so good at faking language understanding that, even when it has no factual basis for an answer, it can easily trick an unwitting human into believing the answer is correct.

    ---

    +More specifically, these words are tokens, which usually represent some smaller part of a word. For instance, “understand” and “able” could be represented as two tokens that, when put together, become the word “understandable”.
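
    If you want to see real token boundaries, the tiktoken package (assuming it’s installed: pip install tiktoken) exposes the tokenizers used by the GPT families:

    ```python
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")   # a GPT-4-era tokenizer
    tokens = enc.encode("understandable")
    print(tokens)                                # the token IDs
    print([enc.decode([t]) for t in tokens])     # the text piece behind each ID
    ```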



  • I am a pilot and this is NOT how autopilot works.

    There are some autoland capabilities in the larger commercial airliners, but an autopilot can be as simple as a wing-leveler.
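
    A wing-leveler really is conceptually simple; here is a toy sketch of such a control loop (all numbers invented, nothing like certified avionics code):

    ```python
    # Toy wing-leveler: a proportional-derivative loop drives the bank angle
    # toward zero. The "aircraft response" below is a crude stand-in.
    def simulate_wing_leveler(bank=15.0, kp=0.8, kd=0.3, dt=0.1, steps=50):
        prev_error = 0.0
        for _ in range(steps):
            error = -bank                              # want wings level
            command = kp * error + kd * (error - prev_error) / dt
            bank += command * dt                       # crude roll dynamics
            prev_error = error
        return bank

    print(f"final bank angle: {simulate_wing_leveler():.2f} degrees")
    ```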

    The waypoints must be programmed into the GPS by the pilot. Altitude is entirely controlled by the pilot, not the plane, except when flying a programmed instrument approach, and only once the autopilot captures the glideslope (so you need to be in the correct general area in 3D space for it to work).

    An autopilot is actually a major hazard to the untrained pilot and has killed many, many of them as a result.

    Whereas when I get in my Tesla, I use voice commands to say where I want to go, and nowadays I don’t have to make interventions. Even when it was first released six years ago, it did more than most aircraft autopilots.




  • I hate that I am defending Israel when I say this, because what is occurring in Gaza is tragic, but a lot of people are confusing “Genocide” with perceived “War Crimes” as defined by international law, and also confusing “Hamas” with “Palestine” or the “Palestinian Authority”.

    Hamas is a terrorist government (similar in nature to the Taliban) that receives a lot of external funding from countries that actively wish to see the death of Israel and all Jews, making Hamas the chief perpetrator of genocide in this conflict, despite how ineffective it has been in that goal.

    Israel was attacked by this terrorist government and is now defending itself, with the express war goal of destroying Hamas. While Israel has had a tenuous relationship with the Palestinian people (namely the government’s active efforts to limit the Palestinian Authority and drag its feet on granting the PA more autonomy and their own state, which is deplorable and inexcusable), it does not and has not wished to kill an entire culture of people.

    Complicating matters, Hamas commonly employs warfare techniques that violate the Geneva Conventions, like placing government and military headquarters in the basements of protected buildings such as hospitals and places of worship. The moment it does that and abuses those internationally recognized sanctuaries, they become legitimate military targets, leading to the tragic deaths of unwitting civilians.

    People can object to the war on the grounds that war is tragic and results in many civilian casualties, but making meritless claims is detrimental both to international institutions and to the definition of genocide. South Africa calls what Israel is doing a genocide, yet explicitly looks the other way on Ukraine and continues to forge close ties with Putin. (For the record, Russia’s actions in Ukraine are also not considered genocide under its strict international definition, but Russia has been found guilty of war crimes.)

    Israel has an internationally recognized right to defend itself, and it is doing so by dismantling Hamas through force. The Palestinian people are unfortunately caught in the crossfire. With that said, Israel’s methods to this end are not above criticism, and it has faced pressure from the US and Biden to limit civilian casualties wherever possible and to use ground forces to directly attack Hamas rather than relying on airstrikes that have resulted in many innocent deaths.

    For those reading who think all war is bad, I’ll leave you with this quote from John Stuart Mill:

    War is an ugly thing, but not the ugliest of things: the decayed and degraded state of moral and patriotic feeling which thinks that nothing is worth a war, is much worse. When a people are used as mere human instruments for firing cannon or thrusting bayonets, in the service and for the selfish purposes of a master, such war degrades a people. A war to protect other human beings against tyrannical injustice; a war to give victory to their own ideas of right and good, and which is their own war, carried on for an honest purpose by their free choice, — is often the means of their regeneration. A man who has nothing which he is willing to fight for, nothing which he cares more about than he does about his personal safety, is a miserable creature who has no chance of being free, unless made and kept so by the exertions of better men than himself. As long as justice and injustice have not terminated their ever-renewing fight for ascendancy in the affairs of mankind, human beings must be willing, when need is, to do battle for the one against the other.





  • This is done by combining a diffusion model with a ControlNet. As long as you have a decently modern Nvidia GPU and familiarity with Python and PyTorch, it’s relatively simple to create your own model.

    The ControlNet paper is here: https://arxiv.org/pdf/2302.05543.pdf

    I implemented this paper back in March. It’s as simple as it is brilliant. By using methods originally intended to adapt large pre-trained language models to a specific application, the authors created a new model architecture that can better control the output of a diffusion model.
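
    If you’d rather not implement it from scratch, Hugging Face’s diffusers library ships a ControlNet pipeline; a minimal sketch (the checkpoint names are commonly published ones, and the black placeholder image should really be a Canny edge map of a source photo):

    ```python
    # pip install torch diffusers transformers accelerate
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # ControlNet trained on Canny edges, attached to Stable Diffusion 1.5.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Placeholder conditioning image; substitute real Canny edges to steer
    # the composition of the generated image.
    edges = Image.fromarray(np.zeros((512, 512), dtype=np.uint8))
    image = pipe("a cozy cabin in the woods", image=edges).images[0]
    image.save("controlled_output.png")
    ```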