Guessing the original commenter hasn't taken complex analysis, or has some other geometry-oriented viewpoint that gives them satisfaction, but these expressions are among the most incredible and useful tools in all of mathematics (IMO). Hadn't seen another comment reinforcing this, so thank you for dropping these.
Cauchy path integration feels like a cheat code once you fully imbibe it.
Got me through many problems involving seemingly impossible-to-memorize identities, and re-deriving complex relations becomes essentially trivial.
Complex exponentials and complex logarithms are useful in some symbolic computations, such as those involving formulae for derivatives or primitives, and this is indeed the only application where the use of e^x and the natural logarithm is worthwhile.
However, whenever your symbolic computation produces a mathematical model that will be used for numeric computations, i.e. in a computer program, it is more efficient to replace all e^x exponentials and natural logarithms with 2^x exponentials and binary logarithms, instead of retaining the e-based exponentials and logarithms and evaluating them directly.
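For concreteness, here is a minimal Python sketch of that substitution (the function names are mine, and math.exp2 needs Python 3.11+): e^x = 2^(x * log2(e)) and ln(x) = log2(x) * ln(2), with the constants folded in once.

    import math

    LOG2_E = 1.0 / math.log(2.0)   # log2(e), computed once
    LN_2 = math.log(2.0)

    def exp_via_exp2(x: float) -> float:
        # e^x = 2^(x * log2(e)); only the base-2 exponential runs per call
        return math.exp2(x * LOG2_E)

    def log_via_log2(x: float) -> float:
        # ln(x) = log2(x) * ln(2)
        return math.log2(x) * LN_2

    print(exp_via_exp2(1.0), math.e)           # both ~2.718281828459045
    print(log_via_log2(10.0), math.log(10.0))  # both ~2.302585092994046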
At the same time, it is also preferable to replace the trigonometric functions of arguments measured in radians with trigonometric functions of arguments measured in cycles (i.e. functions of 2*Pi*x).
This replacement eliminates the computations needed for argument range reduction that otherwise have to be made at each function evaluation, wasting time and reducing the accuracy of the results.
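A rough Python sketch of the cycle-argument idea, borrowing the cos1/sin1 names that appear later in this thread (they are not library functions): the reduction x - round(x) is exact in floating point, while reducing a radian argument modulo 2*Pi is not, since 2*Pi is not exactly representable.

    import math

    def cos1(x: float) -> float:
        # argument in cycles; x - round(x) is an exact operation
        return math.cos(2.0 * math.pi * (x - round(x)))

    def sin1(x: float) -> float:
        return math.sin(2.0 * math.pi * (x - round(x)))

    x = 1e15 + 0.25                      # exactly representable as a double
    print(sin1(x))                       # ~1.0, a quarter cycle, reduction was exact
    print(math.sin(2.0 * math.pi * x))   # far off: 2*pi*x already carries a large rounding error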
Even when you use the exponential e^x and the hyperbolic logarithm, a.k.a. the natural logarithm (which are useful only in symbolic computations and are inferior for any numeric computation), you never need to know the value of "e". The value itself is not needed for anything: when evaluating e^x or the natural logarithm, you need only ln 2 or its inverse, in order to reduce the argument to a range where a polynomial approximation can be used to compute the function.
Moreover, you can replace any use of e^x with the use of 2^x, which inserts ln(2) constants in various places (but removes ln 2 from the evaluation of the exponentials and logarithms themselves, which results in a net gain).
If you use only 2^x, you must know that its derivative is ln(2) * 2^x, and knowing this is enough to get rid of "e" anywhere. Even in differentiation formulae, in actual applications most of the multiplications by ln 2 can be absorbed into multiplications by other constants, as you normally do not differentiate bare 2^x expressions but 2^(a*x), where ln(2)*a can be computed at compile time.
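A toy illustration of that constant folding (names are mine, and math.exp2 again assumes Python 3.11+): for f(x) = 2^(a*x), the derivative is (a*ln 2) * 2^(a*x), and the combined constant is computed once up front.

    import math

    a = 3.0
    k = a * math.log(2.0)          # a*ln(2), the "compile time" constant

    def f(x: float) -> float:
        return math.exp2(a * x)    # 2^(a*x), no "e" anywhere

    def f_prime(x: float) -> float:
        return k * math.exp2(a * x)

    # sanity check against a central finite difference
    x, h = 0.7, 1e-6
    print(f_prime(x))
    print((f(x + h) - f(x - h)) / (2.0 * h))   # should agree to ~9 digits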
You start with the formula for the exponential of an imaginary argument, but there the use of "e" is just conventional notation. The transcendental number "e" is never used in evaluating that formula, and none of the numbers produced by computing exponentials or logarithms of real numbers are involved in it.
The meaning of that formula is that if you take the series expansion of the exponential function and replace its argument with an imaginary one, you obtain the series expansions of the corresponding trigonometric functions. The number "e" is nowhere involved in this.
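A quick Python sketch of that claim (exp_series is my name for it): summing the series z^n/n! at z = i*t reproduces cos(t) + i*sin(t), and the constant e never appears anywhere.

    import cmath

    def exp_series(z: complex, terms: int = 40) -> complex:
        total = 0 + 0j
        term = 1 + 0j                # z^0 / 0!
        for n in range(terms):
            total += term
            term *= z / (n + 1)      # next term of the series
        return total

    t = 1.3
    print(exp_series(1j * t))                  # ~(0.2675 + 0.9636j)
    print(cmath.cos(t) + 1j * cmath.sin(t))    # same value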
Moreover, I consider it far more useful to write that formula in a different way, without any "e":
1^x = cos(2Pi*x) + i * sin(2Pi*x)
This gives the relation between the trigonometric functions with arguments measured in cycles and the unary exponential, whose argument is a real number and whose value is a complex number of absolute value 1, and which traces the unit circle in the complex plane as the argument increases.
This formula appears more complex only because of the traditional notation. If you call cos1 and sin1 the corresponding functions of period 1, the formula becomes:
1^x = cos1(x) + i * sin1(x)
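In code, building on the cos1/sin1 sketch from earlier in the thread, the unary exponential is then just a pair of these (unary_exp is my name for it, not a standard function):

    def unary_exp(x: float) -> complex:
        # 1^x = cos1(x) + i*sin1(x), argument in cycles
        return complex(cos1(x), sin1(x))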
The unary exponential may appear weirder, but only because people are habituated by school to the exponential of imaginary arguments instead. Neither of these two functions is weirder than the other, and using the unary exponential is frequently simpler than using the exponential of imaginary arguments, while also being more accurate (no rounding errors from argument range reduction) and faster to compute.
I want to add that any formula that contains exponentials of real arguments, e^x, and/or exponentials of imaginary arguments, e^(i*x), can be rewritten by using only binary exponentials, 2^x, and/or unary exponentials, 1^x, both having only real arguments.
With this substitution, some formulae become simpler and others become more complicated, but, when also considering the cost of the function evaluations, an overall greater simplicity is achieved.
In comparison with the "e"-based exponentials, the binary exponential and the unary exponential and their inverses have the advantage that there are no rounding errors caused by argument range reduction, so they are preferable especially when the exponents can be very big or very small, while the "e"-based exponentials can work fine for exponents guaranteed to be close to 0.
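A rough check of that accuracy claim, reusing the unary_exp sketch above against the direct e-based form:

    import cmath, math

    x = 1e12 + 0.125                     # an eighth of a cycle past a huge integer
    print(unary_exp(x))                  # ~(0.7071 + 0.7071j), reduction is exact
    print(cmath.exp(2j * math.pi * x))   # phase off by ~1e-3 rad from rounding in 2*pi*x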
The air quality issue alone is mind-boggling. The air quality index nominally tops out at 500, corresponding to 'hazardous.' Major Indian cities blow past this threshold on a regular basis in the winter months. In Delhi, poor air quality is responsible for one in seven deaths annually [0]. People born in Delhi now are estimated to lose 8-12 years of life expectancy, depending on the study [1]. This is the norm for now, but it's hard to imagine how much worse things can get.
I was in India for a wedding a few years back and spent a couple days in New Delhi. I remember stepping out into the 6AM brisk morning air and feeling like I was going to cough up a lung.
It tasted like what I imagine a finely aged glass of acid rain would taste like.
You know how when you open the weather app on your phone, in normal places it says things like: sunny, cloudy, rainy? The weather app just showed SMOKE (this was an actual weather report).
This is partially a result of agricultural burning in the surrounding states, which is one of the fastest (and cheapest) ways to clear out the fields for the next crop.
Not to take a stance on the issue either way, but I think the author is only referring to the politics involved in building products, not the broader political/moral issue of what the company does with all of the money it earns from those products. I don't see their post as defending or even referring to the latter.
Everything is political, though. Putting up a barrier against cynicism so you don't have to tackle the tough questions is understandable, but it's not all that justifiable.
I think a more charitable reading of this post doesn't defend the moral aspects you're referring to, but is about much more pedestrian things like office politics.
It kinda makes me sad to see the top comment on a thoughtful piece like this expressing outrage about something the author didn't even take a stance on. I come to Hacker News to avoid this kind of rhetoric.
> I come to Hacker News to avoid this kind of rhetoric.
I think this rhetoric fits seamlessly with the hacker ethos, and it's one of my biggest motivators (if not the biggest) to read through HN comments at all. It's exhausting to comprehend at times, but so is any well-expressed position on the complexity of life. Otherwise, I worry that HN will complete its transformation into just another marketing platform for the wider tech sector, as some seem to already think it is.
I wrote this comment in response to his second chapter, where he presents criticism of the company's political role as cynical, and later where he frames concerns about some tech companies' anti-union behaviour as conspiratorial.
I definitely took an uncharitable reading, but man am I tired of being told big tech is neutral. I will continue to be cynical and I will continue to gnash my teeth at anyone who tells me otherwise.
This feels a bit like semantics. To get something big done you have to build consensus (e.g. on what to build and what resources to dedicate to it) and align incentives. Oftentimes these things require building relationships and trust first. I would consider all of these things to be a part of politics, but your definition seems to only include the bad stuff.
What happens to the CEO and the remaining shell of a company? Do they have to pretend to carry on just to keep up the pretense that this wasn't an acquisition? Or will they actually continue to do things?
They still own the IP and have binding contracts with the Saudis for the AI data center, though it would be hard to implement anything even if they licensed it to other companies: without the engineering talent that made it all function, it's nearly impossible to make any of it work.
It's complex in a physicist's sense of the word: the equations are hopelessly complicated to solve even in very simple cases. This means it's hard to build intuition or describe in simple terms.
Quantum chromodynamics is actually pretty similar to Maxwell's equations of electromagnetism. The big difference is that, unlike photons, gluons interact with each other. This means goodbye to linear equations and simple plane-wave solutions. One can't even solve the equations in empty space, and only recently have supercomputers become powerful enough to make good, quantitative predictions about things like the proton mass.
A key property of QCD is that, unlike in electrodynamics, the forces between interacting objects increase with distance (quark confinement). This is what breaks the usual style of expansions used to simplify problems. It's hard to overstate how important this is.
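One standard way to picture this is the potential between a static quark and antiquark, often modeled in the Cornell form V(r) ≈ -a/r + σ*r: the Coulomb-like term wins at short distances, but the linear σ*r term grows without bound, so pulling the pair apart costs ever more energy, until it becomes cheaper to create a new quark-antiquark pair out of the vacuum.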
One of the implications is that there are many interactions where most possible Feynman diagrams contribute non-negligibly. The advances in theory arguably have much more to do with improvements in techniques and applied math, such as lattice QCD and the work of Dean Lee's group, for instance.
I wonder whether it is inherently complex in an information-theoretic sense, or whether we simply haven't yet found its "natural" basis, under which its description would be most succinct.
Yeah it's a great question. I don't know the answer, but I suspect the people who study it strongly suspect that it is highly complex in this sense. Otherwise they would be looking for simpler representations instead of running massive simulations.
To your question, I think there is an elegant answer, actually: most composite particles in QCD are unstable. They're either made of equal parts matter and antimatter (like pions), or they're heavier than the proton, in which case they can decay into one or more protons (or antiprotons). If any of the internal complexities of the proton made it distinguishable from other protons, they wouldn't both be protons, and one could decay into the other. Quantum mechanics also helps to keep things simple by forcing the various properties of bound states to be quantized; there isn't a version of the proton where, e.g., one of the quarks has a little more energy, similar to how the energies of atomic orbitals are quantized.
> Recently, GPT informed me that the strong force is really a tiny after-effect of the "QCD force"
This is kind of just semantics. QCD describes both the force binding quarks inside protons and neutrons and the residual force binding protons and neutrons together. This is all part of the Standard Model, which has been essentially unchanged for the last 50 years. The big theoretical challenge is to incorporate gravity into this picture, but this is almost impossible to explore experimentally because gravity is very weak compared to the other 3 forces. That's why the Standard Model is so successful, even though it doesn't incorporate gravity.
What enabled this treatment to be used now? Gene editing techniques have existed for a long time, but there were many reasons why they weren't being used in humans, like concerns about off-target edits and heritability. The article mentions something about gold nanoparticles, but this aspect was developed over the course of a few weeks, and in any case these aren't new either.
Well, CRISPR-Cas9 as a tool for genetic engineering was only invented in 2012. A 13-year translation timeline is not unusual, maybe even unusually fast. CAR-T cell therapy for cancer took 30 years from discovery to clinic. It took about 30 years to go from the early attempts to use engineered lipid nanoparticles for drug delivery to the first FDA-approved medication using them, Doxil.
With CRISPR, it took a long time to figure out how to reliably edit just the gene you want and acceptably minimize off-target edits, including by delivering the therapeutic to just the organ affected and getting the dose and release right.
The public is understandably leery of experimental medical techniques. If they had killed this newborn child with CRISPR therapy, it might have created a backlash delaying translation of this technique for years, possibly decades.
In biomedicine, we’re always looking for therapies that approximate the level of precision control available in software. Unfortunately, it’s never more than an approximation, and our ability to measure and predict the size of that error is always limited. That is why the field moves slowly.
Expanding the timeline a bit, CRISPR was known as a possible gene targeting/editing tool by 2008 at least - I distinctly recall learning about it then in a guest lecture.
I mean to ask 'why now?', not 'what took so long?' What about the regulations or the science let this happen now, and not 5 years ago or 5 years into the future?