DragonFly BSD
DragonFly kernel List (threaded) for 2013-07

[GSOC] HAMMER2 compression feature week4 report


From: Daniel Flores <daniel5555@xxxxxxxxx>
Date: Sat, 13 Jul 2013 21:09:35 +0200


Hello everyone,

Here is my week 4 report. In my previous mail I mentioned that I was
working on the write path. Now the write path seems to be complete,
together with the read path. The compression and decompression functions
are in place too and appear to work, but the feature as a whole doesn't
quite work yet.

What happens is that even though the functions are present and seem to
do their job, when I try to open the compressed files stored on a
HAMMER2 drive, they aren't quite the same as the originals stored on
another drive. They can sometimes be opened correctly, and the contents
are sometimes readable too, but it is clear that something goes wrong
somewhere. I'm not sure yet whether the problem occurs during the
compression stage, the decompression stage, or both, but this has to be
fixed. I expected something like this to happen, so starting this
weekend and into next week I'll be working on debugging it and,
hopefully, the results will be positive soon.
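As a debugging aid, a userland round-trip check can help tell whether
the corruption comes from the codec itself or from the read/write paths
around it: if data survives compress -> decompress in isolation, the
paths become the more likely suspect. Here is a minimal sketch using a
stand-in run-length codec (the function names and the codec are
illustrative only, not the actual HAMMER2/LZ4 code):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Stand-in codec: a trivial run-length encoder emitting (count, byte)
 * pairs. Worst case output is 2x the input, so callers must size the
 * destination buffer accordingly.
 */
static size_t
rle_compress(const uint8_t *src, size_t len, uint8_t *dst)
{
	size_t i = 0, o = 0;

	while (i < len) {
		uint8_t b = src[i];
		size_t run = 1;

		while (i + run < len && src[i + run] == b && run < 255)
			run++;
		dst[o++] = (uint8_t)run;
		dst[o++] = b;
		i += run;
	}
	return (o);
}

static size_t
rle_decompress(const uint8_t *src, size_t len, uint8_t *dst)
{
	size_t o = 0;

	for (size_t i = 0; i + 1 < len; i += 2) {
		memset(dst + o, src[i + 1], src[i]);
		o += src[i];
	}
	return (o);
}

/*
 * Round-trip check: returns 1 if the data survives compress ->
 * decompress unchanged. A failure here points at the codec; success
 * shifts suspicion to the surrounding I/O paths.
 */
int
roundtrip_ok(const uint8_t *data, size_t len)
{
	uint8_t comp[8192], out[4096];	/* sized for this demo only */
	size_t clen, dlen;

	clen = rle_compress(data, len, comp);
	dlen = rle_decompress(comp, clen, out);
	return (dlen == len && memcmp(data, out, len) == 0);
}
```

The same idea applies with the real LZ4 routines substituted in: run
the kernel's exact compressor and decompressor over known data in
isolation before suspecting the buffer-cache plumbing.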

Another issue is efficiency. While I was building the write and read
paths, my main goal was simply to get them to work. Now that they
generally seem to work, I have to make them more efficient. The main
problem is that, in order to perform compression and decompression, they
use an intermediate buffer that must be allocated before performing the
task. The data flows like this: logical block -> buffer -> physical
block for compression, and physical block -> buffer -> logical block for
decompression. It would be much better to go directly from one block to
the other and avoid using that buffer altogether. The good news is that
even these inefficient paths are quite fast. The delay is noticeable,
but not big enough to make the feature unusable. So, hopefully, with
further optimization there won't be any noticeable delay at all compared
with files that aren't compressed.
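The extra copy in the current write path, versus compressing straight
into the physical block, can be sketched roughly like this (the block
size, function names, and the identity "compressor" are placeholders,
not the actual HAMMER2 code):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PBLOCK_SIZE	65536	/* illustrative block size */

/*
 * Identity placeholder for the real LZ4 compressor; returns the
 * "compressed" length.
 */
static size_t
compress_block(const uint8_t *src, size_t len, uint8_t *dst)
{
	memcpy(dst, src, len);
	return (len);
}

/*
 * Current approach: logical block -> temporary buffer -> physical
 * block. The buffer must be set up for every operation and the result
 * is copied a second time.
 */
size_t
write_path_buffered(const uint8_t *logical, size_t len, uint8_t *physical)
{
	uint8_t tmp[PBLOCK_SIZE];
	size_t clen;

	clen = compress_block(logical, len, tmp);
	memcpy(physical, tmp, clen);	/* the avoidable copy */
	return (clen);
}

/*
 * Target approach: compress directly into the physical block, skipping
 * the intermediate buffer entirely.
 */
size_t
write_path_direct(const uint8_t *logical, size_t len, uint8_t *physical)
{
	return (compress_block(logical, len, physical));
}
```

Both paths produce the same output; the difference is the per-operation
buffer allocation and the extra memcpy, which is what the optimization
pass would remove.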

So, my next task is debugging the current paths; once they work
correctly, I'll work on optimizing them.

My code is available in my repository, on the "hammer2_LZ4" branch [1].
I'd appreciate any suggestions, criticism, and other feedback.


Daniel

[1] git://leaf.dragonflybsd.org/~iostream/dragonfly.git



