Suggest HiFi Rose RS151 add DSD2048 upsampling feature

I appreciate the skepticism—it’s the bedrock of good engineering. However, equating computational signal processing with ‘superstition’ because of a lack of peer-reviewed DBTs is a bit of a category error. Here’s why:

1. The ‘Audibility’ vs. ‘Measurability’ Fallacy
You’re asking for ‘proof’ of audibility, but in engineering, we first solve for signal integrity. If a short, steep filter in a standard ESS or AKM chip creates 1.5ms of pre-ringing, that is a measurable temporal distortion. Whether an individual ‘hears’ it as a mosquito fart or a blurred snare hit is subjective, but the distortion of the original impulse is an objective fact. Using HQPlayer to offload to a 1-million-tap filter isn’t ‘juju’; it’s choosing a mathematically superior reconstruction of the original waveform that a $5 internal DAC chip simply lacks the taps to execute.
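For the curious, the pre-ringing under discussion is easy to reproduce on paper. A quick numpy/scipy sketch (the 511-tap length and 20 kHz cutoff are illustrative assumptions, not any particular DAC chip's actual filter):

```python
# Illustrative only: measure the pre-ringing of a linear-phase
# "brick-wall" low-pass of the kind used for 44.1 kHz reconstruction.
import numpy as np
from scipy.signal import firwin

fs = 44_100                       # Red Book sample rate
taps = 511                        # assumed short, chip-class filter length
h = firwin(taps, 20_000, fs=fs)   # steep low-pass near Nyquist (22.05 kHz)

peak = int(np.argmax(np.abs(h)))  # the main impulse lands at the center
pre = h[:peak]                    # everything before it is pre-ringing
pre_ms = peak / fs * 1000         # linear-phase delay = (taps - 1) / 2 samples

print(f"pre-ringing extends up to {pre_ms:.2f} ms before the impulse")
print(f"fraction of filter energy before the peak: {np.sum(pre**2) / np.sum(h**2):.3f}")
```

Whether any of that is audible is exactly what this thread is arguing about; the sketch only shows that the time-domain behavior is real and measurable.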

2. The ‘Bit-Perfect’ Misconception
You mentioned the streamer’s job is to provide a bit-perfect signal. That’s true for transport, but reconstruction is not bit-perfect by nature—it’s an estimation of the continuous analog wave between those bits. By upsampling to DSD2048, we aren’t ‘inventing’ data; we are providing the DAC with a signal that requires almost zero filtering in the analog domain. A ‘well-implemented’ 44.1 filter still has to deal with the brick-wall transition at 22.05kHz, which introduces phase shift and ringing. DSD2048 moves that transition to several Megahertz. It’s not about frequency response; it’s about impulse response.
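The rate arithmetic behind that claim, plus a sanity check that upsampling itself leaves the in-band signal alone, can be sketched with scipy (8x is used here for speed; the tone and ratio are illustrative):

```python
# Illustrative sketch: upsampling rate arithmetic and an in-band tone check.
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
print(f"DSD2048 sample rate: {2048 * fs / 1e6:.2f} MHz")  # ~90.32 MHz

t = np.arange(fs) / fs                    # one second of a 1 kHz tone
x = np.sin(2 * np.pi * 1000 * t)
y = resample_poly(x, up=8, down=1)        # 8x oversampled version

# The audible content is unchanged: 1 kHz still dominates the spectrum.
Y = np.abs(np.fft.rfft(y))
peak_hz = np.argmax(Y) * (8 * fs) / len(y)
print(f"dominant component after upsampling: {peak_hz:.0f} Hz")
```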

3. dCS, Chord, and the ‘Jewelry’ Argument
Dismissing dCS or Rob Watts’ work as ‘audibly indistinguishable from a cell phone’ is a bold claim that ignores the fundamental difference between off-the-shelf Delta-Sigma modulation and custom FPGA-based noise-shaping. If the industry-standard chips were ‘perfect,’ companies wouldn’t spend millions developing custom silicon to bypass them. The goal is to minimize the noise floor modulation—something standard chips struggle with under dynamic loads, regardless of their static SINAD measurements.

4. The ‘Blind Test’ Challenge
To answer your question: Can I tell the difference in a controlled DBT? On a resolving system with high-transient material (like percussion or orchestral strings), the difference isn’t in ‘EQ’—it’s in the spatial reconstruction and the decay of notes. Standard PCM often feels ‘compressed’ in time; high-rate DSD feels ‘open.’ You call it superstition; I call it reducing the computational compromises of 1980s-era digital standards.

We can argue about the ‘price-to-performance’ ratio all day—and I’d likely agree that the costs are astronomical—but calling the math behind high-tap filtering and high-rate modulation ‘snake oil’ is simply ignoring the DSP reality of how we turn numbers back into music.

Cheers mate :slight_smile:


This still completely ignores whether using a million, a billion, or a quadrillion taps results in any audible improvements or not. Empirical evidence points at the “not.”

Except that you are, by definition of upsampling. And you can definitely invent the data that sounds different.

Any decent modern DAC (which would be Delta-Sigma) already upsamples. Nice restating of vendor’s thesis, but there is very little to indicate that it is based in any audible reality.

Fundamental difference: you can charge people far more for the former.

And this is exactly the argument cable makers use. Just because some audiophiles are willing to pay tens of thousands of dollars for some (yes) ju-ju allegedly lifting veils and blackening blacks does not mean that it really does. Or that “millions” (it’s quite unlikely that boutique manufacturers, no matter how great their margins are, have anywhere near that kind of money; it’s ESS’s and AKM’s that can spend so much designing an optimized upsampling and an ASIC to run it properly) are actually spent on any research beyond finding what pseudo-scientific verbiage would move more boxes.

Example, please. And no references to Rob Watts’ bleating about audibility of something at -300dB.

So, no difference (what the hell is “spatial reconstruction” anyway?). And any time “resolving system” is mentioned you already know that it’s something with the frequency response of a saw blade and more distortion than a 1960s single-ended triode kit.

You’re assuming all upsampling is equal, which is a fundamental technical oversight. Yes, ESS and AKM make great ASICs, but they are built for mass-market efficiency, not ultimate performance. Their internal filters use limited ‘taps’ (computational steps) to save power and space. If the ‘ideal’ conversion were already solved by these cheap ASICs, why do professional recording studios still invest millions in high-precision clocking and custom AD/DA interfaces?

When we move to DSD2048 via high-power software, we aren’t ‘inventing’ data; we are using mathematically superior interpolation to more accurately reconstruct the original analog curve between samples. This isn’t ‘bleating’ about -300dB; it’s about reducing time-domain errors (aliasing and ringing) that occur when you use the ‘good enough’ filters inside a $20 DAC chip. Dismissing this as ‘cable ju-ju’ shows a lack of distinction between passive accessories and active, verifiable digital signal processing (DSP).

If there were truly ‘no upside’ to different filter topologies, every high-end manufacturer—from dCS to Chord to MSB—would just use a stock off-the-shelf chip and call it a day. They don’t, because how you handle the reconstruction filter directly impacts the impulse response.

High-rate DSD upsampling allows the DAC’s hardware to operate in its most linear range with a much simpler analog low-pass filter. This isn’t about ‘mutant radioactive bats’; it’s about phase coherence and transient snap that our brains use for spatial cues. If you can’t hear the difference between a shallow, ringing stock filter and a high-tap, linear-phase reconstruction, that’s fine—but don’t claim the engineering behind it doesn’t exist just because it doesn’t fit your ‘paper and pencil’ model.

Comparing a computational DSP breakthrough to ‘$10K power cables’ is a classic straw man. One has no measurable effect on the signal; the other demonstrably changes the waveform’s timing and noise floor profile.

You demand examples of ‘imprecise’ conversion? Look at the impulse response of any standard 44.1kHz slow-roll-off filter versus a high-rate DSD reconstruction. The pre- and post-ringing are night and day. If you believe timing errors in the millisecond range are ‘inaudible’ while we evolved to localize sound with microsecond precision, then we simply have a different understanding of human biology. I’ll stick with the technical evolution that brings the music to life.

Cheers Mate :slight_smile:

That assumes something very much not in evidence – that using more taps actually results in any audible difference. Let alone “ultimate performance.”

Studios use external clocks to keep multiple pieces of equipment in sync, not because an external clock makes any improvement in the sound. And they definitely are not doing anything with DSD2048. The only place where you will find it would be from someone like NativeDSD who just runs the original recording through a pro version of HQPlayer so they can sell it for more money.

Ironically, if you get their sampler of the same recording at multiple PCM and DSD rates, the only audible difference is how loud the cha-ching of the cash register goes if you decide to buy the higher rate one…

That’s… not at all how it works.

That’s definitely what vendors want you to believe. Sounds very much like what MQA was trying to sell with their filters (which also supposedly did wonders for aliasing and pre-ringing). Alas it misses that last, but most important piece – about verifiable differences.

And how would they charge the gullible tens of thousands of dollars then?! You are sadly confusing real, verifiable, objective differences and marketing of Veblen goods.

Nope. Wake me up when someone can actually tell a Chord from a phone dongle under identical test conditions.

They don’t for one and only one reason – because by saying “we gots moar!” they can sell you a far more expensive device (with astronomical margins, too).

If you were talking about something broken by design, like a NOS DAC, it would be true. But you are (it seems) not.

What breakthrough?!

And could you at least decide which of the snake oil arguments are you running away with? Is it “timing” (we already had MQA for that)? Or noise floor (a decent phone dongle already has noise floor below anything that would not be swamped by amp and speakers)?

Nobody is forcing you to use the worst possible filter. Unless you are a Stereophile reviewer and haven’t heard anything above 10KHz in a few decades, in which case you don’t care, you just don’t use slow roll-off.

And I will stick with something that has audible effect, not just makes a less than honest vendor laugh all the way to the bank.

It’s interesting that you equate complex Digital Signal Processing (DSP) with ‘snake oil’ accessories like fuses or cables. One is a passive component; the other is active, high-order mathematics.

  1. On Taps and Filters: You ask for evidence of audibility? The ‘not’ you refer to is usually based on steady-state sine wave measurements. However, music is a series of transients. Higher tap counts and high-rate upsampling (like DSD2048) allow for filters that minimize time-domain smearing (pre-ringing) without compromising frequency response. Our brains utilize microsecond timing cues for spatial localization—something a $20 DAC chip’s simplified filter inherently compromises due to limited silicon real estate.
  2. On Professional Gear: Your claim that studios only use clocks for ‘sync’ is a partial truth that misses the bigger picture. In a high-end AD/DA chain, jitter rejection and clock stability directly impact the aperture uncertainty during conversion. If the ‘cheap ASIC’ solved everything, engineers wouldn’t spend $10k on a Lavry or an Antelope converter just for ‘sync’—they do it because the analog reconstruction is demonstrably superior.
  3. The MQA Straw Man: Comparing DSD2048 to MQA is a false equivalency. MQA was a lossy, proprietary compression scheme. DSD2048 upsampling is an open, transparent computational process aimed at moving digital artifacts as far away from the audible band as physics allows. One was about licensing; the other is about maximizing the potential of the hardware.
  4. Timing vs. Noise: It’s not an ‘either/or’ argument. They are linked. Proper high-rate modulation improves both the noise floor in the audible band and the impulse response in the time domain. If you can’t tell a Chord from a dongle, that might be a limitation of the downstream gear or personal preference, but dismissing the engineering as ‘fraud’ ignores decades of AES research into filter topologies.
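Since ‘noise shaping’ keeps coming up, here is the textbook mechanism in miniature: a first-order delta-sigma loop pushes quantization error out of the audio band. This is a toy sketch with illustrative rates, nothing like any vendor's actual modulator:

```python
# Toy first-order delta-sigma modulator: 1-bit output, shaped noise.
import numpy as np

osr = 64                                  # oversampling ratio (illustrative)
fs = 44_100 * osr                         # modulator rate
n = 1 << 16
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)    # in-band test tone

# Integrate the error, quantize to +/-1, feed the result back.
acc, out = 0.0, np.empty(n)
for i in range(n):
    acc += x[i] - (out[i - 1] if i else 0.0)
    out[i] = 1.0 if acc >= 0 else -1.0

err = np.abs(np.fft.rfft(out - x)) ** 2           # quantization-error spectrum
freqs = np.fft.rfftfreq(n, 1 / fs)
inband = err[freqs <= 20_000].sum() / err.sum()   # error power below 20 kHz
print(f"fraction of error power left in the audio band: {inband:.6f}")
```

Whether the residual in-band error is audible at any given rate is the contested question; the shaping itself is standard theory.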

At the end of the day, innovation happens at the ceiling, not the floor. If everyone settled for ‘good enough,’ we’d still be listening to 128kbps MP3s because ‘the math says it’s enough.’ I’ll choose to explore the breakthrough.

Cheers Mate :slight_smile:


When either exists solely for parting the gullible from their money, both are nothing but snake oil.

Funny how this rather meaningless sequence of fancy-sounding but completely meaningless words taken straight from vendors’ marketing materials completely omits the one thing it promised – any evidence of audibility. For a good reason though. Try as he might, even HQPlayer’s developer has not been able to come up with one.

Jitter has not been a problem, especially in studio environments where audiophile crap like I2S and such is avoided like the plague, for decades. And if clocks are synchronized (still the only purpose of external clocks in a professional environment) even stability does not matter all that much. Certainly not with the clocks used even in those $20 dongles. Even dCS does not bother with the overkill clocks that third-rate vendors at the level of Lumin, Rose, or EverSolo use. But then dCS, as overpriced as it is, at least employs actual engineers that know how DACs work.

Funny how that “demonstrably superior” superiority is never demonstrated here. And anyone who knows actual engineers (recording or otherwise) would know that they pay pro-level money for pro-level service, not because of some mythical superiority.

Uh huh. Just go ask Juicy for details of that open, transparent upsampling. You may learn a lot of new Finnish words. None of them suitable to be used around women and children.

MQA presented exactly the same rationale of reduced ringing, better filters, etc. etc. Just like DSD4096 it provided neither, and just like DSD8192 it was only about selling stuff. Being run by slightly better businessmen, MQA at least managed to get their finger into both consumer and producer sides of the market though, at least for a while.

Claims not in evidence.

Only a limitation of not looking at the price tag of the device when auditioning. I had to study more math and physics in high school than the entire staff of Stereophool encountered in their entire lives, combined. I don’t buy bullcrap.

Even Hans Buttheadszen can be an AES member. Decades of actual research, published by AES and others, do not seem to indicate that existing filters, properly applied (so excluding slow roll-off and stuff) are inadequate for audio purposes.

And no, e.g. Chord is not “fraud” in the sense that Chord DACs actually do the DAC thing. They even do it almost as well as a decent Chinese DAC built with quality parts, just for much more money (although they tend to not do any DSD at all). Well, neither is Audemars-Piguet engineering a scam – their watches show time, accurately enough. That they do not do it nearly as well as a Casio at 1/1000 of the price is another issue though.

Quite rarely. Bringing a good product to mass production is a much greater engine of progress than building Veblen goods with no intrinsic performance advantages for a niche audience.

Yet another strawman out of ignorance. Low rate MP3s are very demonstrably audibly different. Measurably, too. A high rate MP3 already totally stumps any audiophile and takes effort for a trained professional to distinguish. Anything at a rate of Red Book or higher is not, even though instruments might pick up some measurable difference. On the fly upsampling before the DAC at best provides an audibly identical result, despite defeating correctly selected upsampling algorithms, and at worst alters the signal enough (by mangling it with misapplied filters) that it does sound different. Unfortunately at that point it is farther away from the original. Might as well use a “resolving” tube amp then.

It’s clear we have a fundamental disagreement on where ‘engineering’ ends and ‘marketing’ begins, but let’s address the actual science you’re dismissing:

  1. The Casio vs. Mechanical Watch Fallacy: Your analogy is flawed because a watch has a singular, static job: counting seconds. Digital-to-analog reconstruction is a dynamic process of high-speed interpolation. A $20 dongle uses a generic, short-tap filter because it lacks the MIPS to do anything else. Moving to DSD2048 allows us to use filters with millions of taps that achieve a level of stop-band rejection and phase linearity that is mathematically impossible on an ASIC chip. That’s not ‘Veblen marketing’; that’s computational horsepower applied to a math problem.
  2. Professional Standards: You claim engineers only use external clocks for sync—tell that to the engineers at Abbey Road or Skywalker Sound who use Grimm or Antelope master clocks specifically to reduce aperture jitter in the conversion stage. If ‘good enough’ were the industry standard, we’d still be using 1990s AD/DA tech. The fact that pro-level gear costs pro-level money is because precision at the physical limit is expensive to engineer.
  3. The ‘Transparency’ of the Original: You argue that upsampling moves us ‘further from the original.’ On the contrary, the ‘original’ digital file contains imaging artifacts and ultrasonic noise created by the sampling process itself. The goal of DSD2048 isn’t to ‘change’ the music, but to provide the cleanest possible reconstruction by moving the digital artifacts far beyond the analog filter’s slope. If you prefer the sound of a standard chip’s steep, ringing filter, that’s your ‘correct’—but it’s not the mathematical ‘ideal.’
  4. Scientific Proof vs. Experience: You lean heavily on ‘no evidence of audibility,’ yet the entire history of high-end audio is based on the fact that steady-state THD measurements don’t capture transient response or spatial reconstruction. If you’re happy with a phone dongle and a high-rate MP3, I genuinely envy your savings. But for those of us who hear the difference in depth and ‘air’ that high-rate modulation provides, the benefit isn’t a ‘myth’—it’s the reason we pursue the hobby.
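On the ‘computational horsepower’ point above, the back-of-envelope is worth writing down. The figures here (one million taps, a 2048x ratio) are illustrative assumptions, not HQPlayer's documented internals:

```python
# Back-of-envelope cost of a long FIR at DSD2048-class rates (assumed figures).
fs_in = 44_100
ratio = 2048
fs_out = fs_in * ratio             # 90,316,800 samples/s
taps = 1_000_000                   # assumed filter length

naive = taps * fs_out              # direct convolution: MACs per second
poly = (taps // ratio) * fs_out    # polyphase: each output needs taps/ratio inputs

print(f"output rate: {fs_out / 1e6:.1f} MHz")
print(f"naive: {naive / 1e12:.0f} TMAC/s, polyphase: {poly / 1e9:.1f} GMAC/s")
```

Even the polyphase figure lands in tens of GMAC/s, which supports the narrow claim that such a filter needs PC-class hardware; it says nothing about whether the result is audible.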

We can go in circles about ‘Juicy’ or MQA all day, but it won’t change the fact that computational audio is the frontier of the industry. Dismissing it all as a scam might feel intellectually superior, but it ignores the very real innovations happening in signal theory.

Cheers mate :slight_smile:


Indeed. Engineering deals with measurable and verifiable data. Marketing deals with attractive-sounding claims that sell gear. You chose to believe the latter.

Both have one job – taking some input (energy and passage of time itself in one case, energy and bits from the input buffer in the other) and generating output (movement of hands or shape of digits on the screen, or an analogue electric signal) as close to the ideal as possible. Adding meaningless flourishes about “high speed” (nothing about audio is “high speed” for modern electronics) or “dynamic” does not change that. Whether one has been stamped in a factory somewhere in China and the other hand-assembled by highly trained mountain zwergs might affect their price, rarity, and possibly aesthetic appeal. It does not predict anything about how well they do their job (well, in reality the factory-stamped one does it much better…)

Even more so, it lacks any demonstrable need to do “anything else.”

This just shows, first, complete unawareness of what ASICs are and can do, and second assumes the completely unproven (in reality, disproven) assertion that those billions and billions of taps are necessary. This is nothing more than FUD from Juicy who (surprise, surprise!) wants to sell you those taps.

Aperture jitter (do you even know what it is?) does not care whether you are using an external clock or not, it only cares that your clock is accurate enough. Which built-in clocks are.

Nope, it’s because getting a replacement unit into a studio same day the old one broke is expensive. So is designing to pro-level duty cycle.

You do realize (probably not :slight_smile: ) that music run through several dozen cycles of ADC/DAC through a prosumer-level audio interface from Guitar Center is completely indistinguishable from the original? So, more BS again…

Firstly that’s not what I am arguing, and secondly if the upsampling does not alter the original sound, it’s no different from upsampling that the DAC already does anyway.

Sure, because “high-end audio” is the industry that sells you fuses and cables. No amount of meaningless nonsense about some mythical “spatial reconstruction” (not something, whatever it is, that any DAC does) changes that any more than you not knowing how DAC performance is measured and imagining that there is some magic “transient” thing that no one at ESS ever heard about that only Joosy can reveal. If you send him money.

And you are being dishonest again, always changing the argument from evidence of audibility that nobody (including Juice) had ever produced to some meaningless hand-waving about spatial something or another.

Except that you don’t hear that any more than anyone else. If you did, yeah, there’d be that evidence of audibility. Joozie would be all over the globe running ABX tests showing how much HQPlayer improves the sound. He does not do that any more than any cable maker agrees to a blind test of their cables. And for exactly the same reason. Under controlled conditions the whole house of cards comes tumbling down.

There are lots of interesting things happening there, and in 2026 a device (like Rose) that does not offer even a basic PEQ, to say nothing about proper room/speaker correction, is obsolete and well behind even a $300 Wiim in any practical sound quality. But upsampling to some ridiculous rate DSD has no more to do with this than whatever (very impressive!) research Patek Philippe does into making watch complications has to do with accurate timekeeping.

If you’ve bought that one DAC that supports DSD2048 and found that it sounds like crap unless fed a preprocessed signal that takes a $3000 PC and an HQPlayer license to make, that’s sad, but it really was to be expected when you buy a snake oil product designed purely for putting “supports DSD2048” in marketing materials.

MusicLover,

I love your explanations of the engineering principles involved in designing DACs. They are informative and erudite.

StandardModel

It’s fascinating how you oscillate between claiming to be an ‘engineer’ and then dismissing the actual computational math behind signal reconstruction as ‘fancy words.’ Let’s settle the technical record:

  1. The Tap/ASIC Reality: You claim ASICs are ‘optimized’ and that billions of taps are unnecessary. In reality, the 32-tap to 256-tap filters inside a $20 ESS or AKM chip are economic compromises, not technical ideals. They are constrained by thermal limits and die space. Using a powerful CPU to run a million-tap poly-sinc filter isn’t about ‘creating data’; it’s about achieving a much steeper stop-band rejection and phase linearity that a tiny chip simply cannot compute in real-time.
  2. **Professional Gear & Clocks:** Your claim that studios only use clocks for ‘sync’ is factually incorrect. High-end mastering houses use external master clocks to minimize aperture uncertainty during the AD/DA process, which directly affects low-level detail and transient snap. If ‘standard sync’ were all that mattered, the multi-billion dollar pro-audio converter industry would have collapsed decades ago.
  3. **The MQA False Equivalence:** You keep bringing up MQA, which was a lossy, proprietary ‘black box.’ DSD2048 upsampling is a transparent reconstruction process that shifts quantization noise into the multi-MHz range, allowing the analog output stage to use a much simpler, cleaner filter. This is basic Delta-Sigma modulation theory, not marketing.
  4. Audibility & Human Perception: You demand ‘ABX proof’ while ignoring that our hearing is sensitive to microsecond timing errors that don’t show up on a static 1kHz THD graph. If you can’t distinguish between the ringing of a shallow chip filter and the transparency of a high-rate DSD stream, that’s your experience. But don’t confuse your own hearing threshold with the limits of modern signal processing.

If ‘good enough’ were the rule, we’d still be using 8-bit audio. Innovation happens because people look past the ‘paper and pencil’ limits. If you’re happy with your $300 Wiim, enjoy it—but those of us pursuing the technical ceiling will keep moving forward. Cheers!

There is a difference between understanding the “engineering” and whether it is needed and blindly repeating marketing copy as you are doing.

Shouldn’t you know the technical side first? And no, reading Signalyst’s ad does not count.

Which they are.

If they were, you’d present some proof. Or Juicy would. Not of them being theoretically better for some idealized processing but audibly in practice. Neither one of you ever does though.

[quote=“MusicLover, post:32, topic:14427”]
Using a powerful CPU to run a million-tap poly-sinc filter
[/quote]

…lets you talk about using a “million-tap poly-sinc filter” as if you understood what it was.

Upsampling by an integer ratio is a rather trivial operation that, lucky for us, can be and is done perfectly well by a “tiny chip.” Maybe you should actually read the links you post.
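For reference, the integer-ratio upsampling being called trivial really is a few lines: zero-stuff, then low-pass at the original Nyquist. A minimal sketch with illustrative parameters (the `upsample` helper is hypothetical, not any chip's actual firmware):

```python
# Minimal integer-ratio upsampler: zero-stuffing plus an image-rejecting low-pass.
import numpy as np
from scipy.signal import firwin, lfilter

def upsample(x, L, taps=129):
    z = np.zeros(len(x) * L)
    z[::L] = x                          # insert L-1 zeros between samples
    h = firwin(taps, 1.0 / L) * L       # cutoff at the old Nyquist, gain L
    return lfilter(h, 1.0, z)

x = np.sin(2 * np.pi * 0.01 * np.arange(1000))   # slow test tone
y = upsample(x, 4)
print(len(x), "->", len(y))
```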

So you do not understand that whole “aperture” thing again. To minimize it, an accurate and stable clock is the only thing that is needed. Since the studios do need to sync devices, they use an external clock. If you do not have multiple devices and your ADC has a decent clock, an external one does zilch.

Even the $100K/meter power cable industry is not collapsing. This is a bogus argument.

Yeah, again, go ask Jussy for the algorithm.

And that’s why we have Delta-Sigma DACs that upsample everything already. Adding DSD2048 into the mix does not bring the benefit you claim.

You keep repeating something about “microsecond timing” that you’ve read on the back of the HQPlayer box. Nobody listens to THD graphs. As mentioned before, music (let’s repeat that again, an actual music recording) run through dozens of ADC/DAC cycles on consumer level gear can not be distinguished from the original.

If you were arguing in good faith (which you are not) you’d try to find some examples of doing DSD8192 upsampling sounding different and being distinguishable from a Red Book original. But you keep deflecting to some bleating about ringing of test signals. (Not so surprisingly) Juzzi or dCS guys aren’t enthusiastic about engaging in blind shoot-outs either. They do know enough to realize that their fancy wares will not be picked in a blind test.

Except that that’s the whole point. We have enough technology to see planets orbiting stars many light-years away. Do you demand that your TV employ same processing because you can see those planets with a naked eye and want to see them when your TV shows a picture of night sky? Oh, actually you probably would…

Ironically, that’s exactly what audiophiles do paying top dollar for R2R NOS DACs.

Not surprisingly though, your analogy falls flat again, because 8 bit is (drumroll, please) audibly different. A 16/44.1 recording can be distinguished from an 8 bit one even by a Stereophile reviewer. Anything above that though, including your DSD2048, can’t.

And that’s how we get Nordost cables and Synergistic fuses.

Should have gotten those instead of Roses, really, other than the smaller screen, Wiim is superior to Rose in every respect…

And isn’t it odd that the only person finding your copypasting of HQPlayer marketing informative has early onset dementia? Rather sad, really…

It is clear that you have moved from a technical debate into personal insults, which usually happens when someone’s logical loop has run dry. You are stuck on the idea that “Redbook (16/44.1) is the human limit,” a common 1980s-era stance that ignores 40 years of advancement in psychoacoustics and digital filter theory.

It’s telling that when the technical arguments for high-order processing get too complex, you resort to personal jabs. Let’s bring this back to the actual science you claim to support:

  1. The ‘16/44.1 is Enough’ Myth: You argue that anything above Redbook is indistinguishable. This ignores the Nyquist-Shannon reality: while 44.1kHz covers the frequency range, the steep ‘brick-wall’ filters required to prevent aliasing at that rate create massive time-domain ringing. DSD2048 upsampling moves that filter transition so far up the spectrum that the impulse response becomes virtually perfect. If you can’t hear the difference in transient snap, that’s fine—but the math showing the reduction in pre-ringing is indisputable.
  2. ASIC vs. High-Tap Software: You call integer upsampling ‘trivial.’ If it were, DAC manufacturers wouldn’t spend millions developing FPGA-based custom code (like Chord, dCS, or PS Audio). They do this because the fixed-function filters in a $20 ESS chip are ‘one-size-fits-all’ compromises. Using a powerful CPU to run a million-tap poly-sinc filter provides a level of mathematical precision that a tiny silicon die simply cannot execute without melting.
  3. The ‘Pro-Audio’ Fallacy: Your claim that 20 cycles of AD/DA are ‘indistinguishable’ is a classic internet trope that ignores signal degradation and jitter accumulation. If that were true, the mastering industry wouldn’t exist. High-end converters aren’t about ‘sync’ alone; they are about transparency in the reconstruction stage. If a Wiim sounds the same to you as a high-end stack, I honestly envy your ears—you’re saving a lot of money.
  4. Innovation vs. Veblen Goods: Equating Computational Audio (active DSP) with Fuses/Cables (passive accessories) is a desperate false equivalency. One changes the digital reconstruction of the waveform; the other does nothing. If you can’t distinguish between a signal processing breakthrough and a piece of wire, then further technical discussion is pointless.

You’re happy with ‘good enough’ and 1980s standards. I prefer to explore the actual technical ceiling of what modern processing can achieve. We’ll have to leave it there. Enjoy the Wiim!

Cheers mate :slight_smile:

One more thing I find very interesting: I’m offering constructive feedback to HiFi ROSE to enhance and improve their product. However, you act like a hater.

It’s clear this is no longer a technical discussion about DSD2048 or signal processing, but rather a display of personal bias against specific brands. When the arguments shift from math to insults like ‘dementia,’ the conversation loses all value. Hope to enjoy the sonic breakthrough of DSD2048 on my Rose in the coming future while you enjoy your Wiim. To each their own.

Cheers mate :slight_smile:

@ROSEHAN

I hope your engineers can set up a project plan to implement upsampling to DSD2048.

Best regards

It has been fun reading the back and forth between you and Boris. Admittedly I don’t understand half of what either of you is saying. I am ok with that.

I know you are new to the forum and have not had time to read many of the posts here. But if you take the time to read what other members have posted you will soon realize that the functionality that you desire is not even close to being on the radar, nor should it be. Rose’s priority is, and should be, to just make its products function as advertised, and they have enough on their plate already.


The discussion would be far more pleasant if you were to stop lying and arguing something entirely disconnected from what you are being presented with…

Citation, please. Oh, but there won’t be one, because even 40 years of advancement in filter theory won’t make you hear above 20KHz or so, no matter how much you’d want to believe that.

If they did (but you are lying about it) people would be able to tell Red Book from Hi Rez. But they can’t. It is possible that with some “cutting edge” very audiophile NOS DAC this does happen, but even a basic TI chip can handle this just fine, for any audible purposes.

DSD2048 moves it far past “up the spectrum” and into “up the wazoo” territory.

The issue is that you can’t hear it either.

And Synergistic would not spend… well, probably at least a hundred dollars researching what color to paint their next audiophile fuse.

Meanwhile PS Audio is going ESS (there’s a rumor that for the first time in the company’s history they have finally hired one engineer), Chord products measure anywhere between “much worse” and “almost as good as ESS/AKM”, and both Chord and dCS will throw you out of the door if you even suggest a blind test. Because they know the results.

It’s pretty hard to believe that Chord would even have “millions” to spend on reimplementing what ESS does at $100 for 100 times more. On the other hand ESS, AKM, or TI, those people definitely did spend millions on R&D.

If “jitter accumulation” were a thing, and there were enough signal degradation, then… oh, right, people would be able to hear the difference between the original and the result of 40 conversions. Assuming that you are not senile like the other guy, you can’t not understand this point, so you are just outright lying.

Yup. And so are directional audiophile Ethernet cables. You keep repeating the same nonsense but pretend to ignore that there is no evidence that it does anything, and plenty of evidence that it does not.

It’s what’s between them that counts. If it is gullible enough, sure, it will believe that a high-end stack sounds better. But not as well as a high-end stack with a $150,000 power cord, some cable lifters, and Bybee quantum dots.

Absent any evidence of audibility, they are exactly the same.

While a technical discussion with someone as intellectually dishonest as you is, indeed, pointless, it is rather entertaining to see how you keep trying to wiggle out of showing that those “computational breakthroughs” have any audible effect, but only keep digging yourself even deeper.

You’re a Rose owner, so you’re happy with not even good enough :rofl:

Except that not knowing what is actually possible, you invest in DSD2048 snake oil that has no benefit instead of something that does improve sound, like a proper room correction system.

I will bet a box (that’s a box of bottles, not a cardboard box Americans put their wine in) of good Riesling that in a blind test you will prefer a Wiim over whatever it is that you have gotten that does not even work properly without external upsampling.

With all due respect, you are now relying on the “Blind Test / ABX” argument as a universal shield to dismiss any engineering you don’t personally value. You are also conflating transient response (timing) with frequency response (the 20 kHz limit), which is a common misunderstanding in digital audio theory.

It’s fascinating how you keep retreating to the 20kHz limit while ignoring that time-domain resolution is a completely different metric of human hearing. Let’s correct the record one last time:

  1. Timing vs. Frequency: Human hearing isn’t just about a sine wave at 20kHz; it’s about inter-aural time differences (ITD). We can detect timing shifts in the microsecond range, far shorter than the ~22.7 µs sample period at 44.1kHz. High-rate upsampling and million-tap filters aren’t for ‘hearing ultrasonic frequencies’; they are for minimizing phase shift and ringing in the audible band. That is signal theory, not marketing.
  2. The ASIC Compromise: You claim ESS/AKM have ‘millions’ for R&D, and they do—but their goal is mass-market efficiency. A chip designer has to balance performance with power consumption and heat. Software-based upsampling to DSD2048 has no such constraints; it uses the massive GFLOPS of a modern CPU to execute math that a $20 silicon die physically cannot. To call this ‘trivial’ is to ignore the reality of hardware limitations.
  3. The ‘Pro’ Fallacy: If 40 cycles of AD/DA were truly ‘indistinguishable,’ the high-end converter market would be a ghost town. Professionals use gear from Merging Technologies or Antelope because transparency matters when signals are processed. Your ‘Guitar Center’ example is the definition of ‘good enough’ for hobbyists, but it is not the standard for high-fidelity reconstruction.
  4. Room Correction vs. Upsampling: This isn’t a zero-sum game. A serious enthusiast uses both room correction (for acoustics) and high-rate upsampling (for reconstruction accuracy). Dismissing one because of the other is a false choice.
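Point 1 can be made concrete. The sketch below (the tap count and cutoff are my own illustrative choices, not any vendor’s actual filter) builds a generic linear-phase brick-wall lowpass with the windowed-sinc method and measures how much of its impulse response arrives *before* the main peak, i.e. the pre-ringing being argued about:

```python
import numpy as np

# Generic linear-phase "brick-wall" lowpass via the windowed-sinc method.
# Tap count and cutoff are illustrative, not any specific DAC's filter.
fs = 44_100.0
n_taps = 1023
cutoff = 20_000.0

n = np.arange(n_taps) - (n_taps - 1) / 2              # centered time axis, samples
h = (2 * cutoff / fs) * np.sinc(2 * cutoff / fs * n)  # ideal lowpass taps
h *= np.blackman(n_taps)                              # window to make it realizable
h /= h.sum()                                          # unity DC gain

peak = int(np.argmax(np.abs(h)))                  # main impulse lands mid-filter
pre_frac = float(np.sum(h[:peak] ** 2) / np.sum(h ** 2))  # energy before the peak
pre_span_ms = peak / fs * 1e3                     # response starts this long early
```

Because a linear-phase filter is symmetric, the peak sits at the exact center tap and some energy necessarily precedes it; whether that pre-response is audible is the disputed question, but its existence and size are plain measurement.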
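Point 2 can also be rough-quantified. The figures below are back-of-envelope assumptions of mine (a “million-tap” class filter and a 16×44.1 kHz output stage), purely to show the gap between naive direct-form convolution and the FFT overlap-add approach long software convolvers rely on:

```python
import math

# Rough operations budget for a very long FIR; all figures are assumed.
taps = 1_000_000            # "million-tap" class software filter
out_rate = 16 * 44_100      # 705.6 kHz output stage (assumption)

# Direct form: every output sample costs `taps` multiply-accumulates.
direct_macs = taps * out_rate                 # ~7.1e11 MAC/s

# FFT overlap-add: per-sample cost grows only with log2 of the block size
# (the constant factor 4 here is a crude estimate).
fft_ops = out_rate * 4 * math.log2(2 * taps)  # tens of millions of ops/s
```

The several-orders-of-magnitude gap is why this class of filtering is practical on a general-purpose CPU yet impractical on a low-power DAC die; it says nothing, either way, about whether the result is audible.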

You are convinced that if a measurement isn’t on a basic THD chart, it doesn’t exist. I prefer to trust the engineering that addresses the temporal accuracy of the music. Since you’re so confident that a Wiim is the ceiling of audio performance, I’ll leave you to your Riesling. Some of us prefer to keep pushing the envelope.

Cheers mate :slight_smile:

As an owner of the RS151, I actively provide feedback and suggestions to HIFI ROSE. While I recognize that HIFI ROSE may not prioritize my feedback or opinions, I have made an effort to contribute. I have no regrets. If thousands of owners give the same feedback, maybe they will address this concern (hopefully).

In the real world, it is advisable to never say never. It is possible that the developers of RS151 are not currently working on it, or that it may be included in the new model in the future.

Cheers mate :slight_smile:


Music Lover,

The points that you are reiterating are completely accurate from an engineering point of view. I spent fifty years in patent law, and the old lawyer adage applies: when you have the facts, argue the facts; when you have the law in your favor, argue the law; when you have neither, pound on the table.
The responses to your positions sound like a lot of table pounding. “Sound and fury, signifying nothing.”

StandardModel
