Rename Amazon Cloud Drive to Amazon Drive - fixes #532

Nick Craig-Wood 2016-07-11 12:42:44 +01:00
parent 8c2fc6daf8
commit 56adb52a21
24 changed files with 117 additions and 117 deletions

View File

@ -23,7 +23,7 @@
<li>Openstack Swift / Rackspace cloud files / Memset Memstore</li>
<li>Dropbox</li>
<li>Google Cloud Storage</li>
<li>Amazon Cloud Drive</li>
<li>Amazon Drive</li>
<li>Microsoft One Drive</li>
<li>Hubic</li>
<li>Backblaze B2</li>
@ -79,7 +79,7 @@ sudo mandb</code></pre>
<li><a href="http://rclone.org/dropbox/">Dropbox</a></li>
<li><a href="http://rclone.org/googlecloudstorage/">Google Cloud Storage</a></li>
<li><a href="http://rclone.org/local/">Local filesystem</a></li>
<li><a href="http://rclone.org/amazonclouddrive/">Amazon Cloud Drive</a></li>
<li><a href="http://rclone.org/amazonclouddrive/">Amazon Drive</a></li>
<li><a href="http://rclone.org/b2/">Backblaze B2</a></li>
<li><a href="http://rclone.org/hubic/">Hubic</a></li>
<li><a href="http://rclone.org/onedrive/">Microsoft One Drive</a></li>
@ -676,7 +676,7 @@ file2.jpg</code></pre>
<td align="center">No</td>
</tr>
<tr class="even">
<td align="left">Amazon Cloud Drive</td>
<td align="left">Amazon Drive</td>
<td align="center">MD5</td>
<td align="center">No</td>
<td align="center">Yes</td>
@ -753,7 +753,7 @@ e/n/d/q&gt; n
name&gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ &quot;amazon cloud drive&quot;
2 / Amazon S3 (also Dreamhost, Ceph)
\ &quot;s3&quot;
@ -956,7 +956,7 @@ n/s&gt; n
name&gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ &quot;amazon cloud drive&quot;
2 / Amazon S3 (also Dreamhost, Ceph)
\ &quot;s3&quot;
@ -1178,7 +1178,7 @@ n/s&gt; n
name&gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ &quot;amazon cloud drive&quot;
2 / Amazon S3 (also Dreamhost, Ceph)
\ &quot;s3&quot;
@ -1282,7 +1282,7 @@ e/n/d/q&gt; n
name&gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ &quot;amazon cloud drive&quot;
2 / Amazon S3 (also Dreamhost, Ceph)
\ &quot;s3&quot;
@ -1360,7 +1360,7 @@ e/n/d/q&gt; n
name&gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ &quot;amazon cloud drive&quot;
2 / Amazon S3 (also Dreamhost, Ceph)
\ &quot;s3&quot;
@ -1461,7 +1461,7 @@ y/e/d&gt; y</code></pre>
<p>To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the <code>service_account_file</code> prompt and rclone won't use the browser based authentication flow.</p>
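<p>As an illustration only (the remote name and path here are placeholders, and the exact <code>type</code> string is whatever <code>rclone config</code> writes for a Google Cloud Storage remote), the resulting config section might look something like this:</p>
<pre><code>[remote]
type = google cloud storage
service_account_file = /path/to/credentials.json</code></pre>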
<h3 id="modified-time-3">Modified time</h3>
<p>Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the &quot;mtime&quot; key in RFC3339 format accurate to 1ns.</p>
<h2 id="amazon-cloud-drive">Amazon Cloud Drive</h2>
<h2 id="amazon-cloud-drive">Amazon Drive</h2>
<p>Paths are specified as <code>remote:path</code></p>
<p>Paths may be as deep as required, eg <code>remote:directory/subdirectory</code>.</p>
<p>The initial setup for Amazon cloud drive involves getting a token from Amazon which you need to do in your browser. <code>rclone config</code> walks you through it.</p>
@ -1475,7 +1475,7 @@ e/n/d/q&gt; n
name&gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ &quot;amazon cloud drive&quot;
2 / Amazon S3 (also Dreamhost, Ceph)
\ &quot;s3&quot;
@ -1534,7 +1534,7 @@ y/e/d&gt; y</code></pre>
<h3 id="specific-options-3">Specific options</h3>
<p>Here are the command line options specific to this cloud storage system.</p>
<h4 id="acd-templink-thresholdsize">--acd-templink-threshold=SIZE</h4>
<p>Files this size or more will be downloaded via their <code>tempLink</code>. This is to work around a problem with Amazon Cloud Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.</p>
<p>Files this size or more will be downloaded via their <code>tempLink</code>. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.</p>
<p>To download files above this threshold, rclone requests a <code>tempLink</code> which downloads the file through a temporary URL directly from the underlying S3 storage.</p>
<h3 id="limitations-3">Limitations</h3>
<p>Note that Amazon cloud drive is case insensitive so you can't have a file called &quot;Hello.doc&quot; and one called &quot;hello.doc&quot;.</p>
@ -1556,7 +1556,7 @@ n/s&gt; n
name&gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ &quot;amazon cloud drive&quot;
2 / Amazon S3 (also Dreamhost, Ceph)
\ &quot;s3&quot;
@ -1641,7 +1641,7 @@ n/s&gt; n
name&gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ &quot;amazon cloud drive&quot;
2 / Amazon S3 (also Dreamhost, Ceph)
\ &quot;s3&quot;
@ -1720,7 +1720,7 @@ n/q&gt; n
name&gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ &quot;amazon cloud drive&quot;
2 / Amazon S3 (also Dreamhost, Ceph)
\ &quot;s3&quot;
@ -1802,7 +1802,7 @@ n/s&gt; n
name&gt; remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ &quot;amazon cloud drive&quot;
2 / Amazon S3 (also Dreamhost, Ceph)
\ &quot;s3&quot;
@ -1913,7 +1913,7 @@ nounc = true</code></pre>
<li>Fix retry doing one too many retries</li>
<li>Local</li>
<li>Fix problems with OS X and UTF-8 characters</li>
<li>Amazon Cloud Drive</li>
<li>Amazon Drive</li>
<li>Check a file exists before uploading to help with 408 Conflict errors</li>
<li>Reauth on 401 errors - this has been causing a lot of problems</li>
<li>Work around spurious 403 errors</li>
@ -1999,7 +1999,7 @@ nounc = true</code></pre>
<li>S3</li>
<li>Allow IAM role and credentials from environment variables - thanks Brian Stengaard</li>
<li>Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon</li>
<li>Amazon Cloud Drive</li>
<li>Amazon Drive</li>
<li>Retry on more things to make directory listings more reliable</li>
</ul></li>
<li>v1.27 - 2016-01-31
@ -2015,7 +2015,7 @@ nounc = true</code></pre>
<li>Warn the user about files with same name but different case</li>
<li>Make <code>--include</code> rules add their implicit exclude * at the end of the filter list</li>
<li>Deprecate compiling with go1.3</li>
<li>Amazon Cloud Drive</li>
<li>Amazon Drive</li>
<li>Fix download of files &gt; 10 GB</li>
<li>Fix directory traversal (&quot;Next token is expired&quot;) for large directory listings</li>
<li>Remove 409 conflict from error codes we will retry - stops very long pauses</li>
@ -2114,7 +2114,7 @@ nounc = true</code></pre>
<li>Make lsl output times in localtime</li>
<li>Fixes</li>
<li>Fix allowing user to override credentials again in Drive, GCS and ACD</li>
<li>Amazon Cloud Drive</li>
<li>Amazon Drive</li>
<li>Implement compliant pacing scheme</li>
<li>Google Drive</li>
<li>Make directory reads concurrent for increased speed.</li>
@ -2122,7 +2122,7 @@ nounc = true</code></pre>
<li>v1.20 - 2015-09-15
<ul>
<li>New features</li>
<li>Amazon Cloud Drive support</li>
<li>Amazon Drive support</li>
<li>Oauth support redone - fix many bugs and improve usability
<ul>
<li>Use &quot;golang.org/x/oauth2&quot; as oauth library of choice</li>

View File

@ -14,7 +14,7 @@ Rclone is a command line program to sync files and directories to and from
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Cloud Drive
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2
@ -93,7 +93,7 @@ See the following for detailed instructions for
* [Dropbox](http://rclone.org/dropbox/)
* [Google Cloud Storage](http://rclone.org/googlecloudstorage/)
* [Local filesystem](http://rclone.org/local/)
* [Amazon Cloud Drive](http://rclone.org/amazonclouddrive/)
* [Amazon Drive](http://rclone.org/amazonclouddrive/)
* [Backblaze B2](http://rclone.org/b2/)
* [Hubic](http://rclone.org/hubic/)
* [Microsoft One Drive](http://rclone.org/onedrive/)
@ -1284,7 +1284,7 @@ Here is an overview of the major features of each cloud storage system.
| Openstack Swift | MD5 | Yes | No | No |
| Dropbox | - | No | Yes | No |
| Google Cloud Storage | MD5 | Yes | No | No |
| Amazon Cloud Drive | MD5 | No | Yes | No |
| Amazon Drive | MD5 | No | Yes | No |
| Microsoft One Drive | SHA1 | Yes | Yes | No |
| Hubic | MD5 | Yes | No | No |
| Backblaze B2 | SHA1 | Yes | No | No |
@ -1365,7 +1365,7 @@ e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -1561,7 +1561,7 @@ n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -1845,7 +1845,7 @@ n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -1999,7 +1999,7 @@ e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2121,7 +2121,7 @@ e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2264,7 +2264,7 @@ Google Cloud Storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.
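As a quick illustration (plain Go, not rclone's own code), an mtime stored
this way is just a timestamp rendered with nanosecond precision:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// An example modification time; rclone stores a value like this under
	// the "mtime" metadata key on the object.
	mtime := time.Date(2016, 7, 11, 12, 42, 44, 123456789, time.UTC)
	fmt.Println(mtime.Format(time.RFC3339Nano))
	// Prints: 2016-07-11T12:42:44.123456789Z
}
```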
Amazon Cloud Drive
Amazon Drive
-----------------------------------------
Paths are specified as `remote:path`
@ -2289,7 +2289,7 @@ e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2379,7 +2379,7 @@ system.
#### --acd-templink-threshold=SIZE ####
Files this size or more will be downloaded via their `tempLink`. This
is to work around a problem with Amazon Cloud Drive which blocks
is to work around a problem with Amazon Drive which blocks
downloads of files bigger than about 10GB. The default for this is
9GB which shouldn't need to be changed.
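As a rough sketch of the idea (the names here are made up for illustration
and are not rclone's internals), the choice comes down to a size comparison
against this threshold:

```go
package main

import "fmt"

// tempLinkThreshold mirrors the documented 9GB default.
const tempLinkThreshold int64 = 9 * 1024 * 1024 * 1024

// downloadVia reports which download path a file of the given size would
// take under the scheme described above.
func downloadVia(size int64) string {
	if size >= tempLinkThreshold {
		// Large file: request a tempLink and fetch it through the
		// temporary URL served from the underlying S3 storage.
		return "tempLink"
	}
	// Small file: use the normal download endpoint.
	return "direct"
}

func main() {
	fmt.Println(downloadVia(1 << 30))  // 1GB  -> direct
	fmt.Println(downloadVia(10 << 30)) // 10GB -> tempLink
}
```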
@ -2434,7 +2434,7 @@ n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2577,7 +2577,7 @@ n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2701,7 +2701,7 @@ n/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2851,7 +2851,7 @@ n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -3030,7 +3030,7 @@ Changelog
* Fix retry doing one too many retries
* Local
* Fix problems with OS X and UTF-8 characters
* Amazon Cloud Drive
* Amazon Drive
* Check a file exists before uploading to help with 408 Conflict errors
* Reauth on 401 errors - this has been causing a lot of problems
* Work around spurious 403 errors
@ -3108,7 +3108,7 @@ Changelog
* S3
* Allow IAM role and credentials from environment variables - thanks Brian Stengaard
* Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon
* Amazon Cloud Drive
* Amazon Drive
* Retry on more things to make directory listings more reliable
* v1.27 - 2016-01-31
* New Features
@ -3122,7 +3122,7 @@ Changelog
* Warn the user about files with same name but different case
* Make `--include` rules add their implicit exclude * at the end of the filter list
* Deprecate compiling with go1.3
* Amazon Cloud Drive
* Amazon Drive
* Fix download of files > 10 GB
* Fix directory traversal ("Next token is expired") for large directory listings
* Remove 409 conflict from error codes we will retry - stops very long pauses
@ -3205,13 +3205,13 @@ Changelog
* Make lsl output times in localtime
* Fixes
* Fix allowing user to override credentials again in Drive, GCS and ACD
* Amazon Cloud Drive
* Amazon Drive
* Implement compliant pacing scheme
* Google Drive
* Make directory reads concurrent for increased speed.
* v1.20 - 2015-09-15
* New features
* Amazon Cloud Drive support
* Amazon Drive support
* Oauth support redone - fix many bugs and improve usability
* Use "golang.org/x/oauth2" as oauth libary of choice
* Improve oauth usability for smoother initial signup

View File

@ -17,7 +17,7 @@ from
- Openstack Swift / Rackspace cloud files / Memset Memstore
- Dropbox
- Google Cloud Storage
- Amazon Cloud Drive
- Amazon Drive
- Microsoft One Drive
- Hubic
- Backblaze B2
@ -95,7 +95,7 @@ See the following for detailed instructions for
- Dropbox
- Google Cloud Storage
- Local filesystem
- Amazon Cloud Drive
- Amazon Drive
- Backblaze B2
- Hubic
- Microsoft One Drive
@ -1278,7 +1278,7 @@ Here is an overview of the major features of each cloud storage system.
Openstack Swift MD5 Yes No No
Dropbox - No Yes No
Google Cloud Storage MD5 Yes No No
Amazon Cloud Drive MD5 No Yes No
Amazon Drive MD5 No Yes No
Microsoft One Drive SHA1 Yes Yes No
Hubic MD5 Yes No No
Backblaze B2 SHA1 Yes No No
@ -1358,7 +1358,7 @@ This will guide you through an interactive setup process:
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -1610,7 +1610,7 @@ This will guide you through an interactive setup process.
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -1888,7 +1888,7 @@ This will guide you through an interactive setup process.
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2038,7 +2038,7 @@ This will guide you through an interactive setup process:
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2158,7 +2158,7 @@ This will guide you through an interactive setup process:
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2296,7 +2296,7 @@ modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.
Amazon Cloud Drive
Amazon Drive
Paths are specified as remote:path
@ -2319,7 +2319,7 @@ This will guide you through an interactive setup process:
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2408,7 +2408,7 @@ Here are the command line options specific to this cloud storage system.
--acd-templink-threshold=SIZE
Files this size or more will be downloaded via their tempLink. This is
to work around a problem with Amazon Cloud Drive which blocks downloads
to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10GB. The default for this is 9GB which
shouldn't need to be changed.
@ -2462,7 +2462,7 @@ This will guide you through an interactive setup process:
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2602,7 +2602,7 @@ This will guide you through an interactive setup process:
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2722,7 +2722,7 @@ which you can get from the b2 control panel.
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -2867,7 +2867,7 @@ This will guide you through an interactive setup process:
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -3044,7 +3044,7 @@ Changelog
- Fix retry doing one too many retries
- Local
- Fix problems with OS X and UTF-8 characters
- Amazon Cloud Drive
- Amazon Drive
- Check a file exists before uploading to help with 408 Conflict
errors
- Reauth on 401 errors - this has been causing a lot of problems
@ -3139,7 +3139,7 @@ Changelog
thanks Brian Stengaard
- Allow low privilege users to use S3 (check if directory exists
during Mkdir) - thanks Jakub Gedeon
- Amazon Cloud Drive
- Amazon Drive
- Retry on more things to make directory listings more reliable
- v1.27 - 2016-01-31
- New Features
@ -3157,7 +3157,7 @@ Changelog
- Make --include rules add their implicit exclude * at the end of
the filter list
- Deprecate compiling with go1.3
- Amazon Cloud Drive
- Amazon Drive
- Fix download of files > 10 GB
- Fix directory traversal ("Next token is expired") for large
directory listings
@ -3251,13 +3251,13 @@ Changelog
- Fixes
- Fix allowing user to override credentials again in Drive, GCS
and ACD
- Amazon Cloud Drive
- Amazon Drive
- Implement compliant pacing scheme
- Google Drive
- Make directory reads concurrent for increased speed.
- v1.20 - 2015-09-15
- New features
- Amazon Cloud Drive support
- Amazon Drive support
- Oauth support redone - fix many bugs and improve usability
- Use "golang.org/x/oauth2" as oauth libary of choice
- Improve oauth usability for smoother initial signup

View File

@ -17,7 +17,7 @@ Rclone is a command line program to sync files and directories to and from
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Cloud Drive
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2

View File

@ -64,7 +64,7 @@ var (
func init() {
fs.Register(&fs.RegInfo{
Name: "amazon cloud drive",
Description: "Amazon Cloud Drive",
Description: "Amazon Drive",
NewFs: NewFs,
Config: func(name string) {
err := oauthutil.Config("amazon cloud drive", name, acdConfig)
@ -117,7 +117,7 @@ func (f *Fs) Root() string {
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("Amazon cloud drive root '%s'", f.root)
return fmt.Sprintf("amazon drive root '%s'", f.root)
}
// Pattern to match an acd path
@ -165,7 +165,7 @@ func NewFs(name, root string) (fs.Fs, error) {
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, acdConfig)
if err != nil {
log.Fatalf("Failed to configure amazon cloud drive: %v", err)
log.Fatalf("Failed to configure Amazon Drive: %v", err)
}
c := acd.NewClient(oAuthClient)
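Note that only the user-visible Description changes here; the internal Name
stays "amazon cloud drive", so existing remotes whose config line reads
"type = amazon cloud drive" still resolve to this backend. A toy sketch of
that kind of name-keyed registry (not rclone's actual fs package):

```go
package main

import "fmt"

// RegInfo is a cut-down stand-in for the real registration struct.
type RegInfo struct {
	Name        string // stable key stored in the user's config
	Description string // display string, safe to rename
}

var registry = map[string]*RegInfo{}

// Register adds a backend to the registry keyed by its stable Name.
func Register(info *RegInfo) { registry[info.Name] = info }

func main() {
	Register(&RegInfo{Name: "amazon cloud drive", Description: "Amazon Drive"})

	// A config created before the rename still finds the backend,
	// because lookup is by Name, not Description.
	if info, ok := registry["amazon cloud drive"]; ok {
		fmt.Println("resolved to:", info.Description)
	}
}
```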

View File

@ -1,6 +1,6 @@
---
title: "Rclone"
description: "rclone syncs files to and from Google Drive, S3, Swift, Cloudfiles, Dropbox, Google Cloud Storage and Amazon Cloud Drive."
description: "rclone syncs files to and from Google Drive, S3, Swift, Cloudfiles, Dropbox, Google Cloud Storage and Amazon Drive."
type: page
date: "2015-09-06"
groups: ["about"]
@ -18,7 +18,7 @@ Rclone is a command line program to sync files and directories to and from
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Cloud Drive
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2

View File

@ -1,17 +1,17 @@
---
title: "Amazon Cloud Drive"
description: "Rclone docs for Amazon Cloud Drive"
date: "2015-09-06"
title: "Amazon Drive"
description: "Rclone docs for Amazon Drive"
date: "2016-07-11"
---
<i class="fa fa-amazon"></i> Amazon Cloud Drive
<i class="fa fa-amazon"></i> Amazon Drive
-----------------------------------------
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
The initial setup for Amazon cloud drive involves getting a token from
The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser. `rclone config` walks
you through it.
@ -29,7 +29,7 @@ e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
@ -84,21 +84,21 @@ you to unblock it temporarily if you are running a host firewall.
Once configured you can then use `rclone` like this,
List directories in top level of your Amazon cloud drive
List directories in top level of your Amazon Drive
rclone lsd remote:
List all the files in your Amazon cloud drive
List all the files in your Amazon Drive
rclone ls remote:
To copy a local directory to an Amazon cloud drive directory called backup
To copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup
### Modified time and MD5SUMs ###
Amazon cloud drive doesn't allow modification times to be changed via
Amazon Drive doesn't allow modification times to be changed via
the API so these won't be accurate or used for syncing.
It does store MD5SUMs so for a more accurate sync, you can use the
@ -109,7 +109,7 @@ It does store MD5SUMs so for a more accurate sync, you can use the
Any files you delete with rclone will end up in the trash. Amazon
don't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Amazon's apps or via
the Amazon cloud drive website.
the Amazon Drive website.
### Specific options ###
@ -119,9 +119,9 @@ system.
#### --acd-templink-threshold=SIZE ####
Files this size or more will be downloaded via their `tempLink`. This
is to work around a problem with Amazon Cloud Drive which blocks
downloads of files bigger than about 10GB. The default for this is
9GB which shouldn't need to be changed.
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10GB. The default for this is 9GB which
shouldn't need to be changed.
To download files above this threshold, rclone requests a `tempLink`
which downloads the file through a temporary URL directly from the
@ -129,17 +129,17 @@ underlying S3 storage.
### Limitations ###
Note that Amazon cloud drive is case insensitive so you can't have a
Note that Amazon Drive is case insensitive so you can't have a
file called "Hello.doc" and one called "hello.doc".
Amazon cloud drive has rate limiting so you may notice errors in the
Amazon Drive has rate limiting so you may notice errors in the
sync (429 errors). rclone will automatically retry the sync up to 3
times by default (see `--retries` flag) which should hopefully work
around this problem.
Amazon cloud drive has an internal limit of file sizes that can be
uploaded to the service. This limit is not officially published,
but all files larger than this will fail.
Amazon Drive has an internal limit of file sizes that can be uploaded
to the service. This limit is not officially published, but all files
larger than this will fail.
At the time of writing (Jan 2016) this limit is in the area of 50GB per file.
This means that larger files are likely to fail.
@ -147,4 +147,4 @@ This means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as it would any other
failure. To avoid this problem, use the `--max-size=50GB` option to limit
the maximum size of uploaded files.
the maximum size of uploaded files.

View File

@ -28,7 +28,7 @@ n/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"

View File

@ -25,7 +25,7 @@ Changelog
* Fix retry doing one too many retries
* Local
* Fix problems with OS X and UTF-8 characters
* Amazon Cloud Drive
* Amazon Drive
* Check a file exists before uploading to help with 408 Conflict errors
* Reauth on 401 errors - this has been causing a lot of problems
* Work around spurious 403 errors
@ -103,7 +103,7 @@ Changelog
* S3
* Allow IAM role and credentials from environment variables - thanks Brian Stengaard
* Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon
* Amazon Cloud Drive
* Amazon Drive
* Retry on more things to make directory listings more reliable
* v1.27 - 2016-01-31
* New Features
@ -117,7 +117,7 @@ Changelog
* Warn the user about files with same name but different case
* Make `--include` rules add their implicit exclude * at the end of the filter list
* Deprecate compiling with go1.3
* Amazon Cloud Drive
* Amazon Drive
* Fix download of files > 10 GB
* Fix directory traversal ("Next token is expired") for large directory listings
* Remove 409 conflict from error codes we will retry - stops very long pauses
@ -200,13 +200,13 @@ Changelog
* Make lsl output times in localtime
* Fixes
* Fix allowing user to override credentials again in Drive, GCS and ACD
* Amazon Cloud Drive
* Amazon Drive
* Implement compliant pacing scheme
* Google Drive
* Make directory reads concurrent for increased speed.
* v1.20 - 2015-09-15
* New features
* Amazon Cloud Drive support
* Amazon Drive support
* Oauth support redone - fix many bugs and improve usability
* Use "golang.org/x/oauth2" as oauth libary of choice
* Improve oauth usability for smoother initial signup

View File

@ -25,7 +25,7 @@ See the following for detailed instructions for
* [Dropbox](/dropbox/)
* [Google Cloud Storage](/googlecloudstorage/)
* [Local filesystem](/local/)
* [Amazon Cloud Drive](/amazonclouddrive/)
* [Amazon Drive](/amazonclouddrive/)
* [Backblaze B2](/b2/)
* [Hubic](/hubic/)
* [Microsoft One Drive](/onedrive/)

View File

@ -29,7 +29,7 @@ e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"

View File

@ -30,7 +30,7 @@ e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"

View File

@ -150,7 +150,7 @@ This would exclude
A similar process is done on directory entries before recursing into
them. This only works on remotes which have a concept of directory
(Eg local, drive, onedrive, amazon cloud drive) and not on bucket
(Eg local, google drive, onedrive, amazon drive) and not on bucket
based remotes (eg s3, swift, google compute storage, b2).
## Adding filtering rules ##

View File

@ -28,7 +28,7 @@ e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"

View File

@ -28,7 +28,7 @@ n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"

View File

@ -29,7 +29,7 @@ n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"

View File

@ -22,7 +22,7 @@ Here is an overview of the major features of each cloud storage system.
| Openstack Swift | MD5 | Yes | No | No |
| Dropbox | - | No | Yes | No |
| Google Cloud Storage | MD5 | Yes | No | No |
| Amazon Cloud Drive | MD5 | No | Yes | No |
| Amazon Drive | MD5 | No | Yes | No |
| Microsoft One Drive | SHA1 | Yes | Yes | No |
| Hubic | MD5 | Yes | No | No |
| Backblaze B2 | SHA1 | Yes | No | No |

View File

@ -24,7 +24,7 @@ n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"

View File

@ -30,7 +30,7 @@ n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"

View File

@ -25,7 +25,7 @@ n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Cloud Drive
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"

View File

@ -36,7 +36,7 @@
<li><a href="/swift/"><i class="fa fa-space-shuttle"></i> Swift</a></li>
<li><a href="/dropbox/"><i class="fa fa-dropbox"></i> Dropbox</a></li>
<li><a href="/googlecloudstorage/"><i class="fa fa-google"></i> Google Cloud Storage</a></li>
<li><a href="/amazonclouddrive/"><i class="fa fa-amazon"></i> Amazon Cloud Drive</a></li>
<li><a href="/amazonclouddrive/"><i class="fa fa-amazon"></i> Amazon Drive</a></li>
<li><a href="/onedrive/"><i class="fa fa-windows"></i> Microsoft One Drive</a></li>
<li><a href="/hubic/"><i class="fa fa-space-shuttle"></i> Hubic</a></li>
<li><a href="/b2/"><i class="fa fa-fire"></i> Backblaze B2</a></li>

View File

@ -154,7 +154,7 @@ func CheckListingWithPrecision(t *testing.T, f fs.Fs, items []Item, precision ti
}
if len(objs) == len(items) {
// Put an extra sleep in if we did any retries just to make sure it really
// is consistent (here is looking at you Amazon Cloud Drive!)
// is consistent (here is looking at you Amazon Drive!)
if i != 1 {
extraSleep := 5*time.Second + sleep
t.Logf("Sleeping for %v just to make sure", extraSleep)

View File

@ -39,7 +39,7 @@ const (
// above that set with SetMaxSleep.
DefaultPacer = Type(iota)
// AmazonCloudDrivePacer is a specialised pacer for Amazon Cloud Drive
// AmazonCloudDrivePacer is a specialised pacer for Amazon Drive
//
// It implements a truncated exponential backoff strategy with
// randomization. Normally operations are paced at the
@ -238,7 +238,7 @@ func (p *Pacer) defaultPacer(retry bool) {
}
// acdPacer implements a truncated exponential backoff
// strategy with randomization for Amazon Cloud Drive
// strategy with randomization for Amazon Drive
//
// See the description for AmazonCloudDrivePacer
//
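For reference, a minimal sketch of a truncated exponential backoff with
randomization (illustrative constants, not the values the real pacer uses):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoff returns how long to wait before retry number n: it grows
// exponentially from minSleep, is truncated at maxSleep, and is
// randomized so many clients do not retry in lockstep.
func backoff(n uint) time.Duration {
	const (
		minSleep = 10 * time.Millisecond
		maxSleep = 2 * time.Second
	)
	sleep := minSleep << n
	if sleep <= 0 || sleep > maxSleep { // <= 0 guards against overflow
		sleep = maxSleep
	}
	// Jitter: pick a random duration in [sleep/2, sleep).
	return sleep/2 + time.Duration(rand.Int63n(int64(sleep/2)))
}

func main() {
	for n := uint(0); n < 6; n++ {
		fmt.Printf("retry %d: wait %v\n", n, backoff(n))
	}
}
```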

View File

@ -20,7 +20,7 @@ Dropbox
.IP \[bu] 2
Google Cloud Storage
.IP \[bu] 2
Amazon Cloud Drive
Amazon Drive
.IP \[bu] 2
Microsoft One Drive
.IP \[bu] 2
@ -135,7 +135,7 @@ Google Cloud Storage (http://rclone.org/googlecloudstorage/)
.IP \[bu] 2
Local filesystem (http://rclone.org/local/)
.IP \[bu] 2
Amazon Cloud Drive (http://rclone.org/amazonclouddrive/)
Amazon Drive (http://rclone.org/amazonclouddrive/)
.IP \[bu] 2
Backblaze B2 (http://rclone.org/b2/)
.IP \[bu] 2
@ -1566,7 +1566,7 @@ T}@T{
No
T}
T{
Amazon Cloud Drive
Amazon Drive
T}@T{
MD5
T}@T{
@ -2840,7 +2840,7 @@ prompt and rclone won\[aq]t use the browser based authentication flow.
Google Cloud Storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.
.SS Amazon Cloud Drive
.SS Amazon Drive
.PP
Paths are specified as \f[C]remote:path\f[]
.PP
@ -2971,7 +2971,7 @@ Here are the command line options specific to this cloud storage system.
.SS \-\-acd\-templink\-threshold=SIZE
.PP
Files this size or more will be downloaded via their \f[C]tempLink\f[].
This is to work around a problem with Amazon Cloud Drive which blocks
This is to work around a problem with Amazon Drive which blocks
downloads of files bigger than about 10GB.
The default for this is 9GB which shouldn\[aq]t need to be changed.
.PP
@ -3740,7 +3740,7 @@ Local
.IP \[bu] 2
Fix problems with OS X and UTF\-8 characters
.IP \[bu] 2
Amazon Cloud Drive
Amazon Drive
.IP \[bu] 2
Check a file exists before uploading to help with 408 Conflict errors
.IP \[bu] 2
@ -3921,7 +3921,7 @@ Brian Stengaard
Allow low privilege users to use S3 (check if directory exists during
Mkdir) \- thanks Jakub Gedeon
.IP \[bu] 2
Amazon Cloud Drive
Amazon Drive
.IP \[bu] 2
Retry on more things to make directory listings more reliable
.RE
@ -3957,7 +3957,7 @@ of the filter list
.IP \[bu] 2
Deprecate compiling with go1.3
.IP \[bu] 2
Amazon Cloud Drive
Amazon Drive
.IP \[bu] 2
Fix download of files > 10 GB
.IP \[bu] 2
@ -4146,7 +4146,7 @@ Fixes
.IP \[bu] 2
Fix allowing user to override credentials again in Drive, GCS and ACD
.IP \[bu] 2
Amazon Cloud Drive
Amazon Drive
.IP \[bu] 2
Implement compliant pacing scheme
.IP \[bu] 2
@ -4160,7 +4160,7 @@ v1.20 \- 2015\-09\-15
.IP \[bu] 2
New features
.IP \[bu] 2
Amazon Cloud Drive support
Amazon Drive support
.IP \[bu] 2
Oauth support redone \- fix many bugs and improve usability
.RS 2