Message-ID: <20240819181412.tVrtAtA9@steffen%sdaoden.eu>
Date: Mon, 19 Aug 2024 20:14:12 +0200
From: Steffen Nurpmeso <steffen@...oden.eu>
To: oss-security@...ts.openwall.com
Subject: Re: feedback requested regarding deprecation
 of TLS 1.0/1.1

Jacob Bachmeyer wrote in
 <66C2ACB0.2040203@...il.com>:
 |Peter Gutmann wrote:
 |> Jacob Bachmeyer <jcb62281@...il.com> writes:
 |>> The AtE mode has problems, but is still supported in TLS1.2.  (Why
 |>> was EtA not also introduced in TLS1.2?)
 |>>
 |> It was:
 |>
 |> https://datatracker.ietf.org/doc/html/rfc7366
 |>
 |> So you don't need any new modes, just an extension to signal its
 |> presence and swapping the order of the processing operations if present.
 |
 |I see.  TLS1.2 supports *both* AtE and EtA.
 |
 |My question (here expressed yet another way) still stands unanswered:  
 |excluding cipher suites (and the use of concatenated SHA1+MD5) what, if 
 |any, parts of TLS1.0/1.1 are not also required to implement TLS1.2?
 |
 |Removing support for TLS1.0/1.1 has definite costs in compatibility, 
 |costs that hit particularly hard with legacy embedded devices for which 
 |no updates will be available.  Given that TLS1.2 is to remain supported, 
 |what benefits to maintainability are to be had?  How much of the 
 |TLS1.0/1.1 support does *not* overlap with the TLS1.2 support?  How much 
 |of it *should* overlap if the code were to be optimally refactored?

A primitive view of that is

  # git grep -i tls_v master -- \
      crypto lib include engines ssl apps dev exporters external providers |
    wc -l
  72

but you must weed out false positives like

  master:crypto/evp/e_aes_cbc_hmac_sha256.c:        else if (key->aux.tls_ver >= TLS1_1_VERSION)
  master:providers/implementations/ciphers/cipher_aes_cbc_hmac_sha1_hw.c:        else if (ctx->aux.tls_ver >= TLS1_1_VERSION)

(Grepping with -E 'TLS.+_VERSION' instead is not much better for the
removal party.)

My own gut feeling, by the way, is that no logical argument from
anyone can change anything on this topic: a blemish has to go, the
cancel culture requires its victim, and the next announcement (i.e.,
the web page to which the otherwise hollow email message points)
shall get an entry of effective advertising quality.

I.e., it seems that in OpenSSL all TLSv1.x flavours share the same
implementation:

  git show master:ssl/record/methods/tls1_meth.c
says
  /* TLSv1.0, TLSv1.1 and TLSv1.2 all use the same funcs */
and also
        /* For TLSv1.1 and later explicit IV */

and

  git show master:ssl/t1_lib.c

shows only flag differences between

  SSL3_ENC_METHOD const TLSv1_1_enc_data = {
and
  SSL3_ENC_METHOD const TLSv1_2_enc_data = {

in particular 0 vs

    SSL_ENC_FLAG_SIGALGS | SSL_ENC_FLAG_SHA256_PRF
        | SSL_ENC_FLAG_TLS1_2_CIPHERS,

So it *could* be that "removing support for TLSv1.0 and v1.1" is in
fact only a small, mostly housekeeping diff at first.
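
To make that concrete, here is a minimal sketch of the pattern those
two files describe; the names and constants are hypothetical stand-ins
for OpenSSL's TLS1_*_VERSION and SSL_ENC_FLAG_* definitions, not the
real ones, and MAC plus padding are omitted for brevity:

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  /* Hypothetical version numbers mirroring the TLS wire values. */
  enum {
      MY_TLS1_VERSION   = 0x0301,
      MY_TLS1_1_VERSION = 0x0302,
      MY_TLS1_2_VERSION = 0x0303
  };
  #define MY_FLAG_SIGALGS        (1u << 0)
  #define MY_FLAG_SHA256_PRF     (1u << 1)
  #define MY_FLAG_TLS1_2_CIPHERS (1u << 2)

  /* One method record per protocol version; as in ssl/t1_lib.c, the
   * v1.1 vs v1.2 difference boils down to the flags member. */
  struct enc_method {
      int version;
      unsigned flags;
  };

  static const struct enc_method tlsv1_1_enc = { MY_TLS1_1_VERSION, 0 };
  static const struct enc_method tlsv1_2_enc = {
      MY_TLS1_2_VERSION,
      MY_FLAG_SIGALGS | MY_FLAG_SHA256_PRF | MY_FLAG_TLS1_2_CIPHERS
  };

  /* One CBC record builder shared by all TLSv1.x versions; the only
   * per-version wire difference is the explicit per-record IV of
   * v1.1 and later (v1.0 reused the previous record's last
   * ciphertext block as IV, which is what BEAST exploited). */
  static size_t build_cbc_fragment(const struct enc_method *m,
                                   uint8_t *out,
                                   const uint8_t *iv, size_t ivlen,
                                   const uint8_t *data, size_t datalen)
  {
      size_t off = 0;

      if (m->version >= MY_TLS1_1_VERSION) {
          memcpy(out + off, iv, ivlen);   /* explicit per-record IV */
          off += ivlen;
      }
      memcpy(out + off, data, datalen);
      return off + datalen;
  }

In a layout like this, dropping v1.0/v1.1 removes one version
comparison and a couple of struct initializers, which fits the "small,
mostly housekeeping diff" guess above.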

By the way, I found your question on additional aka redundant
checksums (wherever) very interesting, given that Antonio Diaz Diaz
(author of lzip and plzip plus support libraries) swears by CRC-32
for long-term storage, with absolutely impressive numbers on
reliability (I think here: [1])

  [1] https://www.nongnu.org/lzip/safety_of_the_lzip_format.html#lzma_crc

(I.e., I asked for maybe xxhash support or the like; as it was
a public list:

  > While CRC-32 is ok, i guess people (including me) doubt its
  > viability for long-term archiving, especially when compared with
  > other algorithms.  It is not so terrible as years ago, since most
  > people surely have lots of copies, and the filesystems use
  > checksumming.  But as a standalone archive, CRC-32 fails badly,
  > for example smhasher says "insecure, 8590x collisions, distrib,
  > PerlinNoise":

  The tests performed by smhasher are 100% unrelated to error detection in a
  decompressor context. CRC32 is probably optimal to detect errors in lzip
  members. See
  http://www.nongnu.org/lzip/manual/lzip_manual.html#Quality-assurance

  "Lzip, like gzip and bzip2, uses a CRC32 to check the integrity of the
  decompressed data because it provides optimal accuracy in the detection of
  errors up to a compressed size of about 16 GiB, a size larger than that of
  most files. In the case of lzip, the additional detection capability of the
  decompressor reduces the probability of undetected errors several million
  times more, resulting in a combined integrity checking optimally accurate
  for any member size produced by lzip."

  See also http://www.nongnu.org/lzip/safety_of_the_lzip_format.html#lzma_crc
  '4.1 Interaction between LZMA compression and CRC32' and '7 Conclusions':

  "After 14 years of testing, the MTBF of lzip can only be estimated because
  not even one false negative has ever been observed. If one were to
  continuously decompress corrupt lzip files of about one megabyte in size (10
  decompressions per second), each of them containing the kind of corruption
  most difficult to detect (one random bit flip), then a false negative would
  be expected to happen every 694 million years."
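
(A back-of-the-envelope check of mine on that figure: 694 million
years at 10 decompressions per second is roughly 2.2e17 trials per
expected false negative, i.e. about 2^32 * 5e7, which is consistent
with the quoted "several million times" gain of the decompressor's
own consistency checks over CRC-32's baseline 2^-32 chance of missing
a random corruption.)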

The linked site claims (and that thus likely means: states as fact)
that the error detection is "good enough for avionics", if I recall
correctly.)
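
For reference, the checksum in question is the plain reflected CRC-32
over the decompressed data, the same polynomial gzip uses; a minimal
bitwise sketch (real implementations are table- or
instruction-driven):

  #include <stddef.h>
  #include <stdint.h>

  /* Reflected CRC-32, polynomial 0xEDB88320 (the gzip/lzip one).
   * It detects all single-bit flips and all burst errors up to
   * 32 bits, and misses a random corruption with probability of
   * about 2^-32. */
  static uint32_t crc32(const uint8_t *buf, size_t len)
  {
      uint32_t crc = 0xFFFFFFFFu;

      for (size_t i = 0; i < len; i++) {
          crc ^= buf[i];
          for (int k = 0; k < 8; k++)
              /* Shift out one bit; XOR in the polynomial iff the
               * shifted-out bit was set. */
              crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
      }
      return crc ^ 0xFFFFFFFFu;
  }

Because lzip computes this over the *decompressed* data, the LZMA
decoder's own internal consistency checks act as an additional error
detector in front of it, which is where the quoted "several million
times" factor comes from.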

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)
