Gzip file created with async compression not decodable #135
I see two issues with your code. Fixing those issues, I see that it gives the correct output:

```rust
use tokio::io::AsyncWriteExt;
use async_compression::tokio::write::GzipEncoder;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let mut writer = GzipEncoder::new(Vec::new());
    writer.write_all("test".as_bytes()).await?;
    writer.shutdown().await?;
    tokio::io::stdout().write_all(&writer.into_inner()).await?;
    Ok(())
}
```

```
> cargo run | gunzip
   Compiling foo v0.1.0 (/tmp/tmp.xrCjHsQa0S/foo)
    Finished dev [unoptimized + debuginfo] target(s) in 0.77s
     Running `/home/nemo157/.cargo/shared-target/debug/foo`
test
```

(The encoded stream is still different, but maybe gzip has multiple valid encodings of the same data 🤷)
I ran into the same issue; the above piece of code would be great as an example on docs.rs!
But there is the same issue with the following code:

```rust
let mut file = tokio::fs::File::create("file.lzma").await.unwrap();
```

It gives no panics, but:

```
$ lzma -t file.lzma
```

Without the compressor, tokio::io::copy works fine and the file is complete. Of course, it also fails with any other compressor.

P.S.: I just discovered that tokio::io::copy does not consume compr, and calling compr.shutdown() after all of that activity really helps.
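A self-contained sketch of what that comment describes, with the fix applied (file names are hypothetical; the key point is the explicit shutdown() after tokio::io::copy, since the encoder cannot finish the compressed stream on drop):

```rust
use async_compression::tokio::write::LzmaEncoder;
use tokio::io::AsyncWriteExt;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Hypothetical source; any AsyncRead works here.
    let mut source = tokio::fs::File::open("input.txt").await?;

    let file = tokio::fs::File::create("file.lzma").await?;
    let mut compr = LzmaEncoder::new(file);

    // copy() only borrows compr, so it is still usable afterwards.
    tokio::io::copy(&mut source, &mut compr).await?;

    // Without this shutdown the final compressed block is never
    // written and `lzma -t file.lzma` fails.
    compr.shutdown().await?;
    Ok(())
}
```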
Hello,
I am trying to create a gzip-compressed file on the fly while receiving chunks of data.
My issue is that the resulting file cannot be decoded by gzip/gunzip :-(
I stripped it down to a test that just encodes "test", so I could compare it against a Node.js-based encoding that works. But I do not understand why there is a difference, or whether this is really a bug in the library or an issue with how I use the async file I/O.
Here is my sample code:
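A minimal sketch of the failing pattern (hypothetical names; the encoder is written to and flushed, but shutdown() is never called, which matches the trailing bytes shown below):

```rust
use async_compression::tokio::write::GzipEncoder;
use tokio::io::AsyncWriteExt;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let file = tokio::fs::File::create("test.gz").await?;
    let mut encoder = GzipEncoder::new(file);
    encoder.write_all(b"test").await?;
    // flush() only performs a sync flush of the deflate stream;
    // the stream is never finished because shutdown() is never called.
    encoder.flush().await?;
    Ok(())
}
```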
Result:
This file is not decodable! Gunzip says it is corrupt!
When I create the file via this small Node.js script
the resulting file has the following content:
The issue is not the missing checksum and file length:
I added the checksum and length via the gzip-header crate, and the file is still not decodable via gunzip.
The interesting bit seems to be the encoded stream itself, which differs between async-compression and the working Node.js version:

```
Rust: 2a49 2d2e 0100 0000 ffff
Node: 2b49 2d2e 0100
```

Why are there four trailing bytes 0000 ffff? And why is the first byte different?
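The difference can be reproduced with the flate2 crate directly (a sketch, assuming the distinction here is flushing vs. finishing a deflate stream): a sync flush appends the empty stored block 00 00 ff ff without setting the final-block bit, while finishing the stream sets BFINAL, the low bit of the first block byte, which would explain both the trailing bytes and the 2a/2b difference.

```rust
use flate2::write::DeflateEncoder;
use flate2::Compression;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Flushed but never finished: output ends with the
    // 00 00 ff ff sync-flush marker.
    let mut flushed = DeflateEncoder::new(Vec::new(), Compression::default());
    flushed.write_all(b"test")?;
    flushed.flush()?;
    println!("flushed:  {:02x?}", flushed.get_ref());

    // Properly finished: BFINAL (the low bit of the first byte) is set.
    let mut finished = DeflateEncoder::new(Vec::new(), Compression::default());
    finished.write_all(b"test")?;
    println!("finished: {:02x?}", finished.finish()?);
    Ok(())
}
```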