Hacker News | w10-1's comments

The threshold question is crossover: what Android development experience is required for Swift developers, and what Swift experience is required for Android/Kotlin developers? By saying "without touching XML, Java, or Kotlin", are you implying that Swift developers without Android experience could be successful?

Then the question is: roughly what percentage of Kotlin or Flutter apps could be written in Swift? Today and next year?


While AI might have amplified the end, the drop-off preceded significant AI usage for coding.

So some possible reasons:

- Success: all the basic questions were answered, and the complex questions are hard to ask.

- Ownership: In its heyday, projects used SO as their support channel because it meant they didn't have to answer twice. Now projects prefer to keep support on GitHub and not lose control over messaging to over-eager users.

- Incentives: Good SO karma was a distinguishing feature in employment searches. Now it wouldn't make a difference, and is viewed as being too easy to scam.

- Demand: Fewer new projects. We're past the days of JavaScript and devops churn.

- Community: tight job markets make people less community-oriented

Some non-reasons:

- Competition (aside from AI at the end): SO pretty much killed the competition in that niche (kind of like craigslist).


> - Success: all the basic questions were answered, and the complex questions are hard to ask.

I think this is one major factor that is not getting enough consideration in this comment thread. By 2018-2020, it felt like the number of times that someone else had already asked the question had increased to the point that there was no reason to bother asking it. Google also continued to do a better and better job of surfacing the right StackOverflow thread, even if the SO search didn't.

In 2012 you might search Google, not find what you needed, go to StackOverflow, search and have no better luck, then make a post (and get flamed for it being a frequently-asked question but you were phrasing yours in a different / incorrect way and didn't find the "real" answer).

In 2017, you would search Google and the relevant StackOverflow thread would be in the top few results, so you wouldn't need to post and ask.

In 2020, Google's "rich snippets" were showing you quick answers in the screen real estate that is now used by AI Overview answers, and those often surfaced info taken from StackOverflow.

And then, at the very end of 2022, ChatGPT came along and effectively acted as the StackOverflow search that you always wanted - you could phrase your question as poorly as you want, no one would flame you, and you'd get some semblance of the correct answer (at least for simple questions).

I think StackOverflow was ultimately a victim of its own success. Most of the questions that would be asked by your normal "question asker" type of user were eventually "solved", and it was just a matter of how easy it was to find them. Google, ChatGPT, "AI Overviews", Claude Code, etc. have simply made finding those long-answered questions much easier, as well as answering all of the "new" questions that could be posed - and without all of the drama and hassle of dealing with a human-moderated site.


The volume of basic questions is unlimited. There are new technologies every year.

Not sure. As software becomes a commodity I can see the "old school" tech slowing down (e.g. programming languages, frontend and backend frameworks, etc.). The need for a better programming language is smaller now that LLMs are writing most of the code - the pain of making code more concise/expressive isn't necessarily felt by its writer anymore. The languages that do come out will probably have more specific communities around them (e.g. AI).

Writing for publication is a ridiculous amount of work, smoothing and digesting to the point of pablum, because it's just hard to please everybody. Now that LLMs can tailor to chapter-level discussions, why write?

Still, that's what it takes to reach N > friends+students.

It's beyond ironic that AI empowerment is leading actual creators to stop creating. Books don't make sense any more, and your pet open source project will be delivered mainly via LLMs that conceal your authorship and voice and bastardize the code.

Ideas form through packaging insight for others. Where's the incentive otherwise?


When you have original information that hasn't been released anywhere else in the world, why would a book be a bad choice?

> all compete with each other

It's common business practice to set up internal innovation competitions, and blend the best.


You call all these products launches from google "internal innovation competitions"?

And even if they were, which they aren't, are you sure it's a "common business practice"? How many companies can afford that.


TIL the scale of bitcoin derivatives in 2020 (hence volatility): ~$2T of derivatives on ~$2B of spot market activity. Jeepers!

--- Starting in late 2020, as shown in The Economist's graphic, the spot market in Bitcoin became dwarfed by the derivatives markets. In the last month $1.7T of Bitcoin futures traded on unregulated exchanges, and $6.4B on regulated exchanges. Compare this with the $1.8B of the spot market in the same month. ---


Why would you expect the scale of the derivatives to be related to the scale of the spot market, especially if the derivatives are cash-settled futures? One is basically gambling on the price of BTC going up or down, and the other is trading the actual BTC, right?

Well, for one, with a derivatives market gigantic compared to the underlying one, it becomes relatively cheap to manipulate the underlying market.

If you can make a gigantic bet on the price going up and then buy a large amount of Bitcoin that moves the price up, you can win from that. See the Jane Street India derivatives market issue.


I dunno, ask India and Jane Street. That's the same basic situation: when the derivative market betting on the price going up or down is much larger than the market that actually sets that price, it's ripe for arbitrage/market manipulation by a player big enough to move the market (which one you think it is depends on whether you're one of the gamblers getting fleeced or the one taking their money).

How is trading the actual BTC not also gambling on the price of BTC going up or down?

It's not really, but the difference is that I'm limited by the supply of BTC, and it requires that I actually have the money to make the 'bet' at the start. That restricts the size of the spot market.

If I'm buying futures I can enter into a contract that says "I'll buy a contract for 1 BTC that says BTC is going to go from $88.5k to $98.5k in 1 year." I don't actually hand over any money. In a year's time, if BTC is now $100k, the person who agreed on the contract gives me $10k. If it doesn't go up then I owe the seller $10k. The futures contract is settled in cash - no BTC is involved.

Right now though, I don't have $88.5k to spend on BTC, so the spot market isn't an option. I probably could find $10k in a year's time, so a bet on a BTC future might be viable. The actual derivative 'value' isn't real though. The only money changing hands is the delta of the change in value when the contract is settled.
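To make the "only the delta changes hands" idea concrete, here is a minimal sketch of a long cash-settled futures payoff, with hypothetical numbers and ignoring margin, fees, and funding:

```python
def cash_settled_pnl(entry_price: float, settle_price: float, qty: float) -> float:
    """P&L to the long side of a cash-settled future: only the price
    delta changes hands at settlement; no BTC moves."""
    return (settle_price - entry_price) * qty

# Hypothetical: long a 1 BTC future entered at $88,500, settling at $100,000.
pnl = cash_settled_pnl(88_500, 100_000, 1)
print(pnl)  # 11500: paid by the short to the long; negative if the price fell
```

The short side's P&L is just the negation, which is why the contract nets to zero between the two parties regardless of where the spot price ends up.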

(Caveat: I am a total noob at finance stuff so this could be quite wrong. One of the many reasons I will not be buying that futures contract. :) )


It's very wrong. Futures contracts on traditional exchanges have no counterparty risk and require the deposit of a significant amount of upfront capital as collateral. If the spot price of the underlying moves in either direction, debits or credits are made to and from each margin account and if you don't have the money to cover a margin call, the contract gets closed.

Futures markets sometimes give traders leverage of 100x or more. Margin requirements are much lower than trading spot.

Margin requirements for trading spot are zero, though initial capital requirements are obviously, well, whatever spot is.
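To put rough numbers on the 100x leverage point above, here is a hedged sketch with hypothetical figures (not any exchange's actual margin schedule):

```python
def initial_margin(notional: float, leverage: float) -> float:
    """Collateral required to open a position at the given leverage."""
    return notional / leverage

def liquidation_move(leverage: float) -> float:
    """Fractional adverse price move that exhausts the initial margin
    (ignoring maintenance margin and fees)."""
    return 1.0 / leverage

# Hypothetical: a $100,000 BTC futures position at 100x leverage.
print(initial_margin(100_000, 100))  # 1000.0: only $1k of collateral posted
print(liquidation_move(100))         # 0.01: a 1% move against you wipes it out
```

This is why the liquidations mentioned downthread happen so readily at high leverage: the buffer between entry and forced close is tiny relative to normal BTC volatility.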

Futures contracts aren't just pieces of paper traded between people, they are actual promises to pay for physical delivery of the underlying.

It's not surprising to me that crypto people consider them nothing more than leveraged gambling slips but that's really not how one should think about them. Personally I think crypto needs far heavier regulation than it gets.


Ever heard of liquidations?

Derivatives can be structured in a time-constrained manner that requires them to go up/down in a specific time window, thus amplifying the gains/losses. Also, there's generally no way to short an asset without borrowing it under a contract to pay it back (which requires timing the market move and paying rent on the asset). This is something that options contracts solved.

It's hard for me to consider owning the underlying asset as gambling compared to owning paper bets on the future value. In the former you own it today; in the latter you are betting only on what it will cost to own later.

You might buy BTC to actually spend it, say on paying a ransomware vendor.

We’re calling these organized criminals vendors now?

Because at that scale, the tail is wagging the dog and it is not even close.

This is the 2002 law review article, which preceded the 2006 book on networks that reflected early interest in network effects, arguing that open source is an emergent mode of production. The same analysis could arguably be applied to the creator or influencer economy. It aged well.

  https://en.wikipedia.org/wiki/Yochai_Benkler
But he adopts the techniques of transaction cost economics (TCE) while at the same time posing straw-man TCE claims (e.g., that TCE says there are only integrated firms and markets). TCE says transaction costs matter most in determining economies of transactions at high scale, and its methods can show how a broad variety of costs end up shaping activity and institutions. It also explains which innovations are disruptive (surprise: they change the transaction costs) and thus how digitalization has had such a huge impact so quickly.

The TCE analysis has become second nature in business strategy, but surprisingly rare in policy circles, its intended audience.

And his analysis in hindsight is a bit wishful. Roughly speaking, while open-source reduces coordination costs, it doesn't reduce the underlying complexity. The big open-source projects get that way through major corporate sponsorship, and they run with very clear dictatorial/oligarchic or bureaucratic decision-making if they evolve.

One of the principles of TCE methodology is to compare not ideal with real, but two actual and viable forms of organization. In this case he's projected ideal benefits without any of the real costs. That was forgivable in 2002 or even 2006, but it would be malpractice now.


I like the implication that we can have an alternative to uv speed-wise, but I think reliability and understandability are more important in this context (so this comment is a bit off-topic).

What I want from a package manager is that it just works.

That's what I mostly like about uv.

Many of the changes that made speed possible were to reduce the complexity and thus the likelihood of things not working.

What I don't like about uv (or pip or many other package managers) is that the programmer isn't given a clear mental model of what's happening and thus how to fix the inevitable problems. Better (PubGrub) error messages are good, but it's rare that they can provide specific fixes. So even if you get 99% speed, you end up with 1% perplexity and diagnostic black boxes.

To me the time that matters most is time to fix problems that arise.


> the programmer isn't given a clear mental model of what's happening and thus how to fix the inevitable problems.

This is a priority for PAPER; it's built on a lower-level API so that programmers can work within a clear mental model, and I will be trying my best to communicate well in error messages.


The finding is that older diesel engines and renewable diesel produce measurable adverse effects in microglial stem cells, but new diesel formulations in new engines do not. The implication is that policy-makers should accelerate the transition to newer diesel and abandon renewable diesel. Since Europe has been gung-ho for diesel for decades, this finding could have significant regulatory and market effects.

Appears to be cheap and effective, though under suspicion.

But the personal and policy issues are about as daunting as the technology is promising.

Some of the terms, possibly similar to many such services:

    - The use of Z.ai to develop, train, or enhance any algorithms, models, or technologies that directly or indirectly compete with us is prohibited
    - Any other usage that may harm the interests of us is strictly forbidden
    - You must not publicly disclose [...] defects through the internet or other channels.
    - [You] may not remove, modify, or obscure any deep synthesis service identifiers added to Outputs by Z.ai, regardless of the form in which such identifiers are presented
    - For individual users, we reserve the right to process any User Content to improve our existing Services and/or to develop new products and services, including for our internal business operations and for the benefit of other customers. 
    - You hereby explicitly authorize and consent to our: [...] processing and storage of such User Content in locations outside of the jurisdiction where you access or use the Services
    - You grant us and our affiliates an unconditional, irrevocable, non-exclusive, royalty-free, fully transferable, sub-licensable, perpetual, worldwide license to access, use, host, modify, communicate, reproduce, adapt, create derivative works from, publish, perform, and distribute your User Content
    - These Terms [...] shall be governed by the laws of Singapore
To state the obvious competition issues: If/since Anthropic, OpenAI, Google, X.AI, et al are spending billions on data centers, research, and services, they'll need to make some revenue. Z.ai could dump services out of a strategic interest in destroying competition. This dumping is good for the consumer short-term, but if it destroys competition, bad in the long term. Still, customers need to compete with each other, and thus would be at a disadvantage if they don't take advantage of the dumping.

Once your job or company depends on it to succeed, there really isn't a question.


The biggest threats to innovation are the giants with the deepest pockets. Only 5% of ChatGPT traffic is paid; 95% is given for free. Gemini CLI for developers has a generous free tier. It is easy to get Gemini credits for free for startups. They can afford to dump for a long time until the smaller players starve. How do you compete with that as a small lab? How do you get users when bigger models are free? At least the Chinese labs are scrappy and determined. They are the small David IMO.

Well said

Just FYI, their TOS does say that inputs from API or code use will not be stored. There is an addendum near the bottom.

Yes, and the terms are much more protective for enterprise clients, so it pays to pay. Similar to a protection racket, they (Z.ai et al) raise a threat and then offer to relieve the same threat.

The real guarantee comes from their having (enterprise) clients who would punish them severely for violating their interests, with smaller customers sliding under the same roof (since the same service has to be technically consistent). The punishment comes in the form of becoming persona non grata in investment circles, applied to both the company and the principals. So it's safe for little-company if it's using the same service as that used by big-company - a kind of free-riding protection. The difficulty is that it does open a peephole for security services (and Z.ai expressly says it will comply with any such orders), and security services seem to be used for technological competition nowadays.

In fairness, it's not clear the TOS from other providers are any better, and other bigger providers might be more likely to have established cooperation with security services - if that's a concern.


> Similar to a protection racket, they (Z.ai et al) raise a threat and then offer to relieve the same threat.

Eh? The notion of a protection racket applies when you have virtually no choice. They come on your territory and cause problems if you don't pay up. Nothing like that is happening here: The customer is going on their property and using their service.

If I offered a service for free, and you weren't paying me, I would very happily do all kinds of things with your data. I don't owe you anything, and you can simply just not use my site.

They are not training on API data because they would simply have fewer customers otherwise. There's nothing nefarious in any of this.

In any case, since they're releasing the weights, any 3rd party can offer the same service.


To emphasize the dynamics: (1) No person will migrate until most of their connections migrate, and their connections cannot migrate until everyone does. It's deadlock, for every thread you care about. (2) Automation in job applications and a declining job market have both made networking more essential, so there's no tolerance for lost connections, so you'd have to solve those problems too before all would switch. (3) Even if users don't like it and could surmount the coordination costs of switching, if companies continue to rely on it, switching would be a career-limiting move; and because companies cannot signal their recruitment strategies without triggering a stampede to game their system, companies tend to keep quiet, so no company would lead an exodus.

Still, no one (outside influencers) likes how work networking and recruitment happens today, so users might use both LinkedIn and some new system if one created a more effective networking and recruitment mode (e.g., for some well-defined, high-value subset, like recent Stanford MBAs, YC alumni, FinTech, ...).

