WebDAV 3.0, okHttp, not uploading

Support for Android version of Total Commander


mankokoma
Junior Member
Posts: 20
Joined: 2011-06-14, 08:46 UTC

WebDAV 3.0, okHttp, not uploading

Post by *mankokoma »

Non-rooted Android 10 (xiaomi.eu community ROM), TC Android 3.01b2, WebDAV plugin 3.0: a connection to Nextcloud behind nginx doesn't upload files. I get "error writing to target file" after a timeout, but creating folders works.
After switching from okHttp back to the Apache client, everything works as expected again.
I have no idea how to extract a log file from this very restricted OS. I'll probably find a way over the ADB shell, but I still have to find spare time for that, so sorry for just reporting for now.
ghisler(Author)
Site Admin
Posts: 48021
Joined: 2003-02-04, 09:46 UTC
Location: Switzerland
Contact:

Re: WebDAV 3.0, okHttp, not uploading

Post by *ghisler(Author) »

I don't have any problems with Nextcloud uploads here. Therefore I need more information.

To create a log file, please do the following:
1. Switch to the internal SD card by going to the home folder and then tapping on the first entry in the list.
2. Create a new folder named .TotalCommander, including the dot at the beginning (if it doesn't exist yet).
3. You may need to enable the display of hidden files in the TC configuration if you cannot see that folder.
4. Go into this .TotalCommander folder.
5. Create a new file named log.txt by holding down a finger on the first line (..).
6. Close Total Commander via the "X" button and restart it.
Author of Total Commander
https://www.ghisler.com
ueffel
Junior Member
Posts: 4
Joined: 2020-07-09, 13:01 UTC

Re: WebDAV 3.0, okHttp, not uploading

Post by *ueffel »

I'm also having issues with the okHttp client.
I'm on Android 9, TC 3.0, WebDAV 3.0

The okHttp client works fine for the HTTP request methods PROPFIND, MKCOL, MOVE, COPY and DELETE.
Only the PUT request method causes a problem, and only when used with HTTP/2 (HTTP/1.1, plain or encrypted, works fine).

I created a log file, here is the relevant part:

Code: Select all

2020-07-09 16:32:08.560 WebDAV:OkHttpClient: new connection to: ue-pc
2020-07-09 16:32:08.625 WebDAV:Cert subject: CN=das-ue-pc.fritz.box,O=Internet Widgits Pty Ltd,ST=Some-State,C=de
2020-07-09 16:32:08.627 WebDAV:Cert issuer: CN=das-ue-pc.fritz.box,O=Internet Widgits Pty Ltd,ST=Some-State,C=de
2020-07-09 16:32:12.238 HOST:WebDAV:get dir: /ue-pc/webdav/
2020-07-09 16:32:12.240 WebDAV:LIST /ue-pc/webdav/
2020-07-09 16:32:12.298 WebDAV:PUT /ue-pc/webdav/bla.txt
2020-07-09 16:32:13.311 WebDAV:PUT /ue-pc/webdav/bla.txt
2020-07-09 16:32:23.319 WebDAV:Exception: timeout
2020-07-09 16:32:23.324 WebDAV:timeout
From what I can gather, the okHttp client does not send any request body to nginx over HTTP/2, and nginx times out the client after 10 seconds.
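
For reference, the client side of this is nothing more than a plain PUT. A minimal Kotlin/okHttp sketch with a placeholder URL (and assuming the server certificate is trusted) looks like this:

Code: Select all

import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody

fun main() {
    // Over https, okHttp negotiates HTTP/2 via ALPN when the server offers it,
    // which is exactly the code path that fails here.
    val client = OkHttpClient()

    val request = Request.Builder()
        .url("https://example.com/webdav/bla.txt")   // placeholder URL
        .put("bla".toRequestBody("text/plain".toMediaType()))
        .build()

    client.newCall(request).execute().use { response ->
        println("PUT -> ${response.code} over ${response.protocol}")
    }
}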

Nginx is set up with SSL and HTTP/2 (TLSv1.2 / ECDHE-ECDSA-AES128-GCM-SHA256, don't know if that is important) and the following location block:

Code: Select all

        location /webdav/ {
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Port $server_port;
            # talk HTTP/1.1 with keep-alive to the backend
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://127.0.0.1:9004$request_uri;
            # 0 = no limit on the request body (upload) size
            client_max_body_size 0;
            # proxy_request_buffering off;
            # proxy_buffering off;
        }
The backend that runs at 127.0.0.1:9004 is written in Go and just puts out an "unexpected EOF" error, because the connection is prematurely closed by nginx.
If I disable the request buffering in nginx, so that nginx passes everything it receives directly to the backend, the backend does not read any bytes from the request body.

For testing purposes I configured the backend to use HTTP/2 with the same SSL certificate and cipher suite (TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS 1.2) and connected to it directly with TC, and there everything works just fine.
So it does not seem to be a general problem with HTTP/2, just some quirk of nginx's HTTP/2 implementation.

Hope you can figure this out.

Edit:
I tested 2 nginx versions:
  • nginx version: nginx/1.17.3
    built by cl 16.00.40219.01 for 80x86
    built with OpenSSL 1.1.1c 28 May 2019
    on Windows 10
  • nginx version: nginx/1.14.1
    built with OpenSSL 1.1.0f 25 May 2017 (running with OpenSSL 1.1.0l 10 Sep 2019)
    on armv7 Linux (Raspbian)
Edit2:
Upload via curl works fine too.

Code: Select all

$ curl -v --insecure --upload-file bla.txt https://localhost/webdav/bla.txt
*   Trying ::1:443...
* Connected to localhost (::1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-ECDSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=de; ST=Some-State; O=Internet Widgits Pty Ltd; CN=das-ue-pc.fritz.box
*  start date: Oct 31 00:51:42 2018 GMT
*  expire date: Oct 28 00:51:42 2028 GMT
*  issuer: C=de; ST=Some-State; O=Internet Widgits Pty Ltd; CN=das-ue-pc.fritz.box
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x1ee5816b7a0)
> PUT /webdav/bla.txt HTTP/2
> Host: localhost
> user-agent: curl/7.70.0
> accept: */*
> content-length: 3
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
* We are completely uploaded and fine
< HTTP/2 201
< server: nginx/1.17.3
< date: Thu, 09 Jul 2020 16:58:35 GMT
< content-type: text/plain; charset=utf-8
< content-length: 7
< etag: "1620240f386302a83"
< strict-transport-security: max-age=15768000; includeSubdomains; preload
< x-content-type-options: nosniff
<
Created* Connection #0 to host localhost left intact
ghisler(Author)
Site Admin
Posts: 48021
Joined: 2003-02-04, 09:46 UTC
Location: Switzerland
Contact:

Re: WebDAV 3.0, okHttp, not uploading

Post by *ghisler(Author) »

Thanks for the extra info. It seems that your server does not support the header
Expect: 100-continue

My plugin sends this header with the PUT request to get an immediate reply from the server BEFORE sending the data. The server must either reply with a final error status, or with 100 (Continue) if it will accept the file at that location. Servers which understand the header but cannot meet the expectation would send a 417 (Expectation Failed) error reply.

The problem seems to be that your server doesn't handle the header at all and just waits for the data until the timeout. The Apache client handles this gracefully by waiting 2 seconds for the intermediate reply and, if there is none, sending the body anyway. The okHttp client doesn't support this and just aborts the upload with an error.

Unfortunately, the way the Apache client handles this cannot be implemented with okHttp. I have therefore modified my plugin to do the following (see the sketch below):
1. If the uploaded file is smaller than 100k, no Expect: 100-continue header is sent.
2. If the file is larger than 100k, the Expect: 100-continue header is sent. If there is no reply after 2 seconds, TC aborts the connection and sends a new request without the header. It then turns off the 100-continue mechanism for all further uploads.
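
Roughly, the retry logic looks like this (a simplified Kotlin/okHttp sketch of the two rules above, not the plugin's actual code; the helper and variable names are made up):

Code: Select all

import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.asRequestBody
import java.io.File
import java.io.IOException
import java.util.concurrent.TimeUnit

const val EXPECT_THRESHOLD = 100 * 1024L  // 100k, as described above
var useExpectContinue = true              // switched off after the first failed attempt

fun putFile(client: OkHttpClient, url: String, file: File): Int {
    fun request(withExpect: Boolean) = Request.Builder()
        .url(url)
        .put(file.asRequestBody("application/octet-stream".toMediaType()))
        .apply { if (withExpect) header("Expect", "100-continue") }
        .build()

    if (useExpectContinue && file.length() >= EXPECT_THRESHOLD) {
        // Wait at most 2 seconds for the interim reply before giving up on it.
        val impatient = client.newBuilder().readTimeout(2, TimeUnit.SECONDS).build()
        try {
            impatient.newCall(request(true)).execute().use { return it.code }
        } catch (e: IOException) {
            // No interim reply: stop using 100-continue for all further uploads.
            useExpectContinue = false
        }
    }
    // Small files, or servers that never answered: plain PUT without the Expect header.
    return client.newCall(request(false)).execute().use { it.code }
}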

I have uploaded an update to version 3.01 to the Play Store now. You should receive it within the next few hours; it's currently a slow process.

Btw, if you want to read more about the 100-continue process, you can do it here:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html

According to this RFC, the 100-continue mechanism MUST be implemented by all HTTP 1.1-compliant servers:
Upon receiving a request which includes an Expect request-header
field with the "100-continue" expectation, an origin server MUST
either respond with 100 (Continue) status and continue to read
from the input stream, or respond with a final status code. The
origin server MUST NOT wait for the request body before sending
the 100 (Continue) response. If it responds with a final status
code, it MAY close the transport connection or it MAY continue
to read and discard the rest of the request. It MUST NOT
perform the requested method if it returns a final status code.
Or the newer RFC 7231:
https://tools.ietf.org/html/rfc7231#section-5.1.1
An origin server MUST, upon receiving an HTTP/1.1 (or later)
request-line and a complete header section that contains a
100-continue expectation and indicates a request message body will
follow, either send an immediate response with a final status code,
if that status can be determined by examining just the request-line
and header fields, or send an immediate 100 (Continue) response to
encourage the client to send the request's message body. The origin
server MUST NOT wait for the message body before sending the 100
(Continue) response.
So your server is not following HTTP/1.1 standards.
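
To make the exchange concrete, here is a rough hand-rolled sketch of that handshake over a plain socket (hypothetical host and path, plain HTTP, and it assumes the happy path where the server really answers with 100 Continue):

Code: Select all

import java.net.Socket

fun main() {
    Socket("example.com", 80).use { sock ->
        val out = sock.getOutputStream()
        val reader = sock.getInputStream().bufferedReader()

        // Send only the headers and announce the body via Expect: 100-continue.
        out.write((
            "PUT /webdav/bla.txt HTTP/1.1\r\n" +
            "Host: example.com\r\n" +
            "Content-Length: 3\r\n" +
            "Expect: 100-continue\r\n" +
            "Connection: close\r\n" +
            "\r\n").toByteArray())
        out.flush()

        // A compliant server answers immediately, before any body bytes arrive:
        // either "HTTP/1.1 100 Continue" or a final status such as 401 or 417.
        println(reader.readLine())   // interim status line
        reader.readLine()            // blank line ending the interim response

        // Only now does the client transmit the request body.
        out.write("bla".toByteArray())
        out.flush()

        // Print the final response (status line and headers).
        reader.forEachLine { println(it) }
    }
}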
Author of Total Commander
https://www.ghisler.com
ueffel
Junior Member
Posts: 4
Joined: 2020-07-09, 13:01 UTC

Re: WebDAV 3.0, okHttp, not uploading

Post by *ueffel »

Is the Expect: 100-continue header part of the HTTP/2 standard?
As I said, with HTTP/1.1 everything works fine, including the Expect header. It is just HTTP/2 that does not work.
ghisler(Author)
Site Admin
Posts: 48021
Joined: 2003-02-04, 09:46 UTC
Location: Switzerland
Contact:

Re: WebDAV 3.0, okHttp, not uploading

Post by *ghisler(Author) »

Difficult to say. RFC 7540 does mention HTTP/1.1-style 100 (Continue) replies, but no explicit way to request them:
https://tools.ietf.org/html/rfc7540

Maybe I should just disable HTTP/2? Uploads without Expect: 100-continue are a nightmare. Imagine uploading a 100 MB file, only to get an "access denied" or "login required" error at the end...
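
For reference, disabling HTTP/2 on the okHttp side would just mean restricting the allowed protocols when building the client; a small sketch of the public okHttp API, not necessarily how the plugin does it:

Code: Select all

import okhttp3.OkHttpClient
import okhttp3.Protocol

// Restrict okHttp to HTTP/1.1 so the Expect: 100-continue handshake
// keeps working even against servers that only misbehave over HTTP/2.
val http11OnlyClient = OkHttpClient.Builder()
    .protocols(listOf(Protocol.HTTP_1_1))
    .build()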
Author of Total Commander
https://www.ghisler.com
ueffel
Junior Member
Posts: 4
Joined: 2020-07-09, 13:01 UTC

Re: WebDAV 3.0, okHttp, not uploading

Post by *ueffel »

I think when using HTTP/2 the server side can abort the request by just closing the stream to avoid the unnecessary traffic (but keep the connection at the same time), but I'm not sure about this.

Disabling HTTP/2 seems a reasonable solution to me.
mankokoma
Junior Member
Posts: 20
Joined: 2011-06-14, 08:46 UTC

Re: WebDAV 3.0, okHttp, not uploading

Post by *mankokoma »

Thanks for the 3.01 update, @ghisler - uploads are working again now.
@ueffel, thank you very much for the professional analysis! I assumed it could be a problem with my restrictive nginx/Nextcloud configuration, which I try to make as secure as possible, but only by following tutorials.
I was just about to start looking through the server logs, and now I find this.
Grateful regards!
ghisler(Author)
Site Admin
Posts: 48021
Joined: 2003-02-04, 09:46 UTC
Location: Switzerland
Contact:

Re: WebDAV 3.0, okHttp, not uploading

Post by *ghisler(Author) »

Nice to hear that it works again! You are right, HTTP/2 seems to handle this differently: it allows the server to send a reply to the client whenever it wants, even at the beginning of a large file upload, without needing a 100-continue header. However, okHttp does not implement that - it waits for the upload to complete before reading any replies from the server. But the server drops the connection in the meantime, so okHttp reports a connection error.
Author of Total Commander
https://www.ghisler.com