Nitpicking a Dumb Machine: the Imaginative Coding of ChatGPT
There’s been a buzz about ChatGPT on the tech web recently, with some Tech Bros even claiming to be outsourcing their coding to it.
With any new tool, it’s interesting to try it on a corner case of a real-world problem. New technology doesn’t always work well for me, and while Jason Turner has been able to use it to program his Commodore 64, I couldn’t get a valid output from the thing. However, rather than just looking at working results, the boundary conditions where errors emerge can give a good understanding of the capabilities.
I’ve been playing around with embedded Rust, so I started with an issue I was trying to understand.
“pass an immediate value to a rust asm! macro.”
use asm::immediate;
fn example() {
let value = 0b1010;
asm!("mv a0, $0" : : "r"(immediate(value)));
}
This gave a plausible-looking example using asm!, but it’s completely wrong. It is using an old syntax (now llvm_asm!) incorrectly, with a non-existent immediate macro, trying to do something impossible.
“pass an immediate value with the new rust asm! macro.”
fn example() {
let value = 0b1010;
asm!("mv a0, {}" : : "r"(value) : "a0" : "volatile", "alignstack");
}
It started with the new syntax but reverted to the old syntax part way through the expression, and forgot about the immediate value in the example. I would expect the immediate to be a constant item, so let’s try again.
“how can i pass a const item to a rust macro”
use const_fn::constify;
const VALUE: i32 = 0b1010;
macro_rules! set_register {
($value:expr) => {
asm!("mv a0, $0" : : "r"($value) : "a0" : "volatile" : "alignstack");
}
}
fn example() {
set_register!(constify!(VALUE));
}
It suggested I use the const_fn::constify! macro. The issue is I can’t find any evidence of this macro existing in const_fn.
The purpose here is not to nitpick a dumb machine that isn’t really designed to do anything like this. Rather, it’s to highlight the verbalist nature of the machine, in the sense Nassim Nicholas Taleb has tweeted about.
While it can’t actually solve a problem like an engineer operating from first principles, it can make a great attempt at extrapolating a bogus solution and plausible explanation. I guess that is the “natural” interaction feature. When there is enough chatter in the training data about real problems and solutions, it must be able to make some good inferences.
The full chat log below looks at the answers in detail. It shows a mix of outcomes, from the absurd to the correct.
(Red underlined parts are made-up rubbish or the plain impossible; green marks what looks like a reasonably correct starting point; orange marks obsolete and/or not-quite-right answers.)
It’s actually amazing that ChatGPT can drill down to the correct technology area and infer plausible-seeming if non-existent solutions. The errors above, if anything, suggest it may be good for replacing boilerplate-dominated cargo cult coding in a way that programming language evolution and adoption has not. Paul Graham has summed that up best.
What would really add value is a link back to the evidence supporting the inferences, ideally identifying the most information-rich sources that determined the content. That would replace Google’s search in an instant.
Originally published at https://www.shincbm.com on December 12, 2022.