Duly upvoted, but I'd like to note that the problem with so-called “unsafe languages” isn't undefined behavior per se, but rather the lack of appropriate tools (e.g. a formal axiomatic semantics) to reliably write programs with no undefined behavior.
As long as you use the new semantics, the unsafe parts of the language are off-limits to you, so it's effectively a different language.
I'm not talking about changing languages, but rather about giving existing languages more rigorous specifications. My beef with C, C++, unsafe Rust, etc. isn't the presence of undefined behavior in arbitrary programs (for which the solution is very simple: don't write arbitrary programs!), but rather the absence of a calculational method [0] for designing programs that don't have undefined behavior (for which a formal semantics is an indispensable prerequisite). To convince yourself that a C program doesn't have undefined behavior, you need to interpret the English text of the C standard, and that's what I object to.
[0] The calculations needn't be performed by a machine. If they are performed by a machine, you have a safe language. If they must be performed by humans, you have an unsafe language, which is nevertheless a lot more usable than C and friends.
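To make this concrete, here is a minimal sketch (the midpoint functions and the precondition 0 <= lo <= hi are hypothetical, chosen purely for illustration): the naive version may have undefined behavior because signed overflow is undefined in C, and the checked version is justified by a short calculation a human can carry out by hand, provided the semantics is precise enough to license each step.

```c
#include <limits.h>

/* Naive version: whether this is free of undefined behavior cannot be
   read off the syntax. Signed integer overflow is undefined in C, so
   the addition is UB whenever lo + hi exceeds INT_MAX (or falls below
   INT_MIN) over the mathematical integers. */
int midpoint_naive(int lo, int hi)
{
    return (lo + hi) / 2;
}

/* Calculated version, under the assumed precondition 0 <= lo <= hi
   (e.g. array indices). The by-hand argument, over the mathematical
   integers:
       hi - lo >= 0                 (since lo <= hi)
       hi - lo <= hi <= INT_MAX     (since lo >= 0)
   so the subtraction cannot overflow, and
       lo <= lo + (hi - lo) / 2 <= lo + (hi - lo) = hi <= INT_MAX
   so the addition cannot overflow either. No machine is needed for
   this calculation, but a precise semantics is needed to justify
   each step. */
int midpoint_checked(int lo, int hi)
{
    return lo + (hi - lo) / 2;
}
```

The calculation is exactly the kind of verification condition a machine could discharge; the only difference is who performs it.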
A major difficulty here is that undefined behaviour is a property of a particular execution of a C program, rather than a static property of the program itself. Tools that dynamically detect UB are useful, but will not demonstrate that there are no inputs for which a program will go wrong.
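For instance (a deliberately contrived sketch, not taken from any real codebase): the program below is well-defined for some command lines and has an out-of-bounds read, i.e. undefined behaviour, for others, so a sanitizer exercised only on "friendly" inputs will report nothing.

```c
#include <stdio.h>

int main(int argc, char **argv)
{
    (void)argv;
    int table[8] = {0, 1, 2, 3, 4, 5, 6, 7};

    /* Well-defined whenever argc <= 7; an out-of-bounds read (UB) as
       soon as the program is invoked with enough arguments to make
       argc >= 8. The UB belongs to those executions, not to any
       single run observed on benign inputs. */
    printf("%d\n", table[argc]);
    return 0;
}
```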