Message-ID: <CAKHv7piLM0a-Gy3wWj-5rqh3bCvv-xxi67DZCbU8hRaHwGYFgw@mail.gmail.com>
Date: Mon, 23 Sep 2013 16:04:00 +0200
From: Paul Schutte <sjpschutte@...il.com>
To: sabotage@...ts.openwall.com
Subject: Re: Installing everything in opt

Hi Christian,

That is actually one workload that would not be impacted.
The toolchain will get cached, and thereafter it will be on par with a
"normal" install.

Workloads that will suffer will be things like launching X apps with a lot
of shared libraries (anything written in Qt or GTK): browser startup time,
LibreOffice, mplayer (when we get there one day ;-).

On my machine, launching into LXDE (which is currently minimal) already
takes about as long as booting Ubuntu and auto-logging into a full stack
of applications. It is going to get worse as we add more apps.

Synthetic benchmarks won't show this either, as repetitive I/O will get
cached.
It is the real-world experience that will be bad, as infrequently used
apps will always grind your HDD.
Every time you click on something, it will be sluggish. You will be driven
to getting an SSD.

Using hardlinks in place of the current symlinks would actually work very
well. There would only be a small penalty, because the probability of files
ending up in different allocation groups (ext2, ext3, ext4, XFS and
possibly others) is higher than when they are created "normally".
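
To make the distinction concrete: a hardlink is just another directory
entry for the same inode, while a symlink is a separate inode whose
contents are a path that must be resolved all over again. A minimal
sketch in C (the paths are hypothetical, borrowed from the quoted
example below, and hardlinks only work if both names are on the same
filesystem):

    /* Build with: cc -o linkdemo linkdemo.c */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat orig, hard, sym;

        /* Hardlink: a second name for the same inode
         * (fails with EXDEV if /bin and /opt are different filesystems). */
        link("/opt/test1dir/bin/test1", "/bin/test1-hard");
        /* Symlink: a brand new inode that merely stores the target path. */
        symlink("/opt/test1dir/bin/test1", "/bin/test1-sym");

        stat("/opt/test1dir/bin/test1", &orig);
        stat("/bin/test1-hard", &hard);
        lstat("/bin/test1-sym", &sym);   /* the link itself, not the target */

        printf("hardlink shares the inode: %s\n",
               orig.st_ino == hard.st_ino ? "yes" : "no");
        printf("symlink is its own inode:  %s\n",
               orig.st_ino != sym.st_ino ? "yes" : "no");
        return 0;
    }

Opening /bin/test1-hard resolves in one pass; opening /bin/test1-sym
forces the kernel to walk the whole target path as well.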

What would be interesting to me is some sort of shootout between a
symlinked and a hardlinked system, comparing startup times of the apps and
the general "feel" of the system.

We can probably use something like bootchart and compare them booting into
X with a desktop and a few apps.
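
For a quick and dirty number before setting up bootchart, something like
the sketch below could time the first, cold-cache open() of a binary
through each kind of link (again with the hypothetical paths from above).
Run "echo 3 > /proc/sys/vm/drop_caches" as root before each measurement,
and measure only one path per cache drop, or the second open will hit warm
dentry caches:

    /* Build with: cc -o coldopen coldopen.c  (add -lrt on older glibc)
     * Usage: ./coldopen /bin/test1-sym   or   ./coldopen /bin/test1-hard */
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        struct timespec t0, t1;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int fd = open(argv[1], O_RDONLY);   /* path resolution happens here */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (fd >= 0)
            close(fd);
        printf("%s: %.3f ms\n", argv[1],
               (t1.tv_sec - t0.tv_sec) * 1e3 +
               (t1.tv_nsec - t0.tv_nsec) / 1e6);
        return 0;
    }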


How difficult would it be to change it to use hardlinks instead?

Regards
Paul


On Mon, Sep 23, 2013 at 2:36 PM, Christian Neukirchen
<chneukirchen@...il.com> wrote:

> Paul Schutte <sjpschutte@...il.com> writes:
>
> > Hi Guys,
> >
> > I apologize in advance if I step on someone's toes with this post.
> >
> > Performance-wise it is very bad to put everything in /opt and use
> > symlinks.
> >
> > Let's look at an example of a binary called test1 that uses three dynamic
> > libraries.
> >
> > In a "normal" installation it will go something like this:
> >
> >
> > dirlookup(/bin) -> inodelookup(/bin) -> dirlookup(test1)
> >   -> inodelookup(test1) -> load(test1)
> >
> >
> > In the "symlink" installation it will go something like this:
> >
> >
> > dirlookup(/bin) -> inodelookup(/bin) -> dirlookup(test1)
> >   -> inodelookup(test1) -> readsymlink(test1)
> >   -> dirlookup(/opt) -> inodelookup(/opt) -> dirlookup(test1dir)
> >   -> inodelookup(test1dir) -> dirlookup(bin) -> inodelookup(bin)
> >   -> dirlookup(test1) -> inodelookup(test1)
> >
> > 5 operations vs 13 operations.
> >
> > If we take into account the 3 libraries we are at 20 ops vs 52.
> >
> > If we assume SATA with an 8 ms average seek, this will be 0.16 s vs
> > 0.416 s of seek time for the same binary.
> >
> > One might argue that the metadata will be cached and therefore the
> > penalty is not that bad.
>
> It would be interesting to benchmark the actual performance loss,
> e.g. of a kernel build with toolchain in /usr vs symlinked.
>
> --
> Christian Neukirchen  <chneukirchen@...il.com>  http://chneukirchen.org
>
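
PS: your estimate is easy to reproduce. A toy model that charges one full
8 ms seek for every dirlookup/inodelookup and assumes nothing is cached
(the worst case):

    #include <stdio.h>

    int main(void)
    {
        const double seek_ms = 8.0;   /* quoted SATA average seek */
        const int files = 4;          /* the binary plus 3 libraries */
        const int normal_ops = 5;     /* direct path: 5 lookups per file */
        const int symlink_ops = 13;   /* via /opt: 13 lookups per file */

        printf("normal:  %2d ops, %.3f s\n", files * normal_ops,
               files * normal_ops * seek_ms / 1000.0);
        printf("symlink: %2d ops, %.3f s\n", files * symlink_ops,
               files * symlink_ops * seek_ms / 1000.0);
        return 0;
    }

This prints 20 ops / 0.160 s vs 52 ops / 0.416 s, matching the figures
quoted above.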
