Compare commits

...

145 Commits

Author SHA1 Message Date
Daniel García 175f2aeace
Merge pull request #1270 from BlackDex/update-ci
Updated Github Actions, Fixed Dockerfile
2020-12-17 18:22:46 +01:00
BlackDex feefe69094 Updated Github Actions, Fixed Dockerfile
- Updated the GitHub Actions to build just one binary with all DB
  backends.

- Created a hadolint workflow to check and verify Dockerfiles.
- Fixed current hadolint errors.
- Fixed a bug in the Dockerfile.j2 which prevented the correct libraries
  and tools from being installed on the Alpine images.

- Deleted travis.yml since it is not used anymore
2020-12-16 19:31:39 +01:00
Daniel García 46df3ee7cd
Updated insecure ws dependency and general dep updates 2020-12-15 22:23:12 +01:00
Daniel García bb945ad01b
Merge pull request #1243 from BlackDex/fix-key-rotate
Fix Key Rotation during password change.
2020-12-14 20:56:31 +01:00
BlackDex de86aa671e Fix Key Rotation during password change
When ticking the 'Also rotate my account's encryption key' box, the
key-rotated ciphers are posted after the password change.

During the password change the security stamp was reset, which made
the posted keys return an invalid auth error. This reset is needed to prevent other clients from still being able to read/write.

This is fixed by adding a new database column which stores a stamp exception containing the allowed route and the current security stamp before it gets reset.
When the security stamp check fails, it checks whether there is a stamp exception and tries to match the route and security stamp.

Currently it only allows for one exception, but if needed we could expand it by using a Vec<UserStampException> and changing the functions accordingly (a sketch of the idea follows this entry).

fixes #1240
2020-12-14 19:58:23 +01:00
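A minimal Rust sketch of the stamp-exception check described above; the UserStampException shape and the function are illustrative assumptions, not the actual bitwarden_rs code.

// Hypothetical shape of the stored exception: the route that is still allowed
// plus the security stamp that was valid before the reset.
struct UserStampException {
    route: String,
    security_stamp: String,
}

// Accept a request with an outdated stamp only when it matches the exception.
fn stamp_is_acceptable(
    current_stamp: &str,
    claimed_stamp: &str,
    requested_route: &str,
    exception: Option<&UserStampException>,
) -> bool {
    if claimed_stamp == current_stamp {
        return true;
    }
    match exception {
        Some(e) => e.route == requested_route && e.security_stamp == claimed_stamp,
        None => false,
    }
}
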
Daniel García e38771bbbd
Merge pull request #1267 from jjlin/datetime-cleanup
Clean up datetime output and code
2020-12-14 18:36:39 +01:00
Daniel García a3f9a8d7dc
Merge pull request #1265 from jjlin/cipher-rev-date
Fix stale data check failure when cloning a cipher
2020-12-14 18:35:17 +01:00
Daniel García 4b6bc6ef66
Merge pull request #1266 from BlackDex/icon-user-agent
Small update on favicon downloading
2020-12-14 18:34:07 +01:00
Jeremy Lin 455a23361f Clean up datetime output and code
* For clarity, add `UTC` suffix for datetimes in the `Diagnostics` admin tab.
* Format datetimes in the local timezone in the `Users` admin tab.
* Refactor some datetime code and add doc comments.
2020-12-13 19:49:22 -08:00
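A hedged chrono sketch of the two display styles mentioned in the entry above (explicit UTC suffix vs. local timezone); the function names are illustrative.

use chrono::{DateTime, Local, NaiveDateTime, Utc};

// Render a naive UTC timestamp with an explicit "UTC" suffix (Diagnostics tab style).
fn format_utc(dt: &NaiveDateTime) -> String {
    format!("{} UTC", dt.format("%Y-%m-%d %H:%M:%S"))
}

// Render the same timestamp converted to the server's local timezone (Users tab style).
fn format_local(dt: &NaiveDateTime) -> String {
    let utc: DateTime<Utc> = DateTime::from_utc(*dt, Utc);
    utc.with_timezone(&Local).format("%Y-%m-%d %H:%M:%S").to_string()
}
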
BlackDex 1a8ec04733 Small update on favicon downloading
- Changed the user-agent, since the previous one caused at least one site to stall the
  connection (the same happens on icons.bitwarden.com)
- Added default_header creation to the lazy static CLIENT (sketched below)
- Added referer passing, which is checked by some sites
- Some other small changes
2020-12-10 23:13:24 +01:00
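A hedged sketch of a shared reqwest client with default headers plus a per-request referer, roughly as the entry above describes; the exact header values are assumptions.

use once_cell::sync::Lazy;
use reqwest::blocking::Client;
use reqwest::header::{HeaderMap, HeaderValue, REFERER, USER_AGENT};

// Build the default headers once and reuse a single client for all icon fetches.
static CLIENT: Lazy<Client> = Lazy::new(|| {
    let mut headers = HeaderMap::new();
    headers.insert(USER_AGENT, HeaderValue::from_static("Mozilla/5.0 (X11; Linux x86_64) icon-fetcher"));
    Client::builder()
        .default_headers(headers)
        .build()
        .expect("failed to build icon client")
});

// Pass a Referer per request, since some sites check it.
fn fetch(url: &str, referer: &str) -> reqwest::Result<reqwest::blocking::Response> {
    CLIENT.get(url).header(REFERER, referer).send()
}
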
Jeremy Lin 4e60df7a08 Fix stale data check failure when cloning a cipher 2020-12-10 00:17:34 -08:00
Daniel García 219a9d9f5e
Merge pull request #1262 from BlackDex/icon-fixes
Updated icon downloading.
2020-12-08 18:05:05 +01:00
BlackDex 48baf723a4 Updated icon downloading
- Added more checks to prevent panics (removed unwrap)
- Try to download from the base domain, or add www, when the provided domain
  fails
- Added some more domain validation checks to prevent errors
- Added the ICON_BLACKLIST_REGEX to a lazy static HashMap, which
  speeds up the checks (sketched below)
- Validate the regex at startup and on config changes.
- Some cleanups
- Disabled some noisy debugging from 2 crates.
2020-12-08 17:34:18 +01:00
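A hedged sketch of caching the compiled ICON_BLACKLIST_REGEX so it isn't recompiled on every check; the cache layout is an assumption, not the actual implementation.

use std::collections::HashMap;
use std::sync::Mutex;

use once_cell::sync::Lazy;
use regex::Regex;

// Cache compiled regexes keyed by their pattern string, so a config change
// (a new pattern) simply compiles and stores a new entry.
static ICON_BLACKLIST_CACHE: Lazy<Mutex<HashMap<String, Regex>>> =
    Lazy::new(|| Mutex::new(HashMap::new()));

fn is_blacklisted(domain: &str, pattern: &str) -> bool {
    let mut cache = ICON_BLACKLIST_CACHE.lock().unwrap();
    let re = cache
        .entry(pattern.to_string())
        .or_insert_with(|| Regex::new(pattern).expect("invalid ICON_BLACKLIST_REGEX"));
    re.is_match(domain)
}
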
Daniel García 6530904883
Update web vault version to 2.17.1 2020-12-08 16:43:19 +01:00
Daniel García d15d24f4ff
Merge pull request #1242 from BlackDex/allow-manager-role
Adding Manager Role support
2020-12-08 16:11:55 +01:00
Daniel García 8d992d637e
Merge pull request #1257 from jjlin/cipher-rev-date
Validate cipher updates with revision date
2020-12-08 15:59:21 +01:00
Daniel García 6ebc83c3b7
Merge pull request #1247 from janost/admin-disable-user
Implement admin ability to enable/disable users
2020-12-08 15:43:56 +01:00
Daniel García b32f4451ee
Merge branch 'master' into admin-disable-user 2020-12-08 15:42:37 +01:00
Daniel García 99142c7552
Merge pull request #1252 from BlackDex/update-dependencies-20201203
Updated dependencies and Dockerfiles
2020-12-08 15:33:41 +01:00
Daniel García db710bb931
Merge pull request #1245 from janost/user-last-login
Show last active time on admin users page
2020-12-08 15:31:25 +01:00
Jeremy Lin a9e9a397d8 Validate cipher updates with revision date
Prevent clients from updating a cipher if the local copy is stale.
Validation is only performed when the client provides its last known
revision date; this date isn't provided when using older clients,
or when the operation doesn't involve updating an existing cipher.

Upstream PR: https://github.com/bitwarden/server/pull/994
2020-12-07 19:34:00 -08:00
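A hedged sketch of the stale-data check described above; parameter names and the error message are illustrative.

use chrono::NaiveDateTime;

// Reject the update only when the client sent its last known revision date
// and that date is older than what the server has stored. Older clients that
// send no date, or operations that don't touch an existing cipher, skip the check.
fn check_not_stale(
    stored_revision: NaiveDateTime,
    client_revision: Option<NaiveDateTime>,
) -> Result<(), &'static str> {
    match client_revision {
        Some(client) if client < stored_revision => Err("The client copy of this cipher is out of date"),
        _ => Ok(()),
    }
}
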
BlackDex d46a6ac687 Updated dependencies and Dockerfiles
- Updated crates
- Updated rust-toolchain
- Updated Dockerfile to use latest rust 1.48 version
- Updated AMD64 Alpine to use same version as rust-toolchain and support
  PostgreSQL.
- Updated Rocket to the commit right before they updated hyper.
  Until that update there were some crates updated and some small fixes.
  After that build fails and we probably need to make some changes
(which is probably something already done in the async branch)
2020-12-04 13:38:42 +01:00
janost 1eb5495802 Show latest active device as last active on admin page 2020-12-03 17:07:32 +01:00
BlackDex 7cf8809d77 Adding Manager Role support
This has been requested a few times (#1136 & #246 & forum), and there were already two
(1:1 duplicate) PRs (#1222 & #1223) which needed some changes and unfortunately got no
follow-ups or further comments.

This PR adds two auth headers (sketched below).
- ManagerHeaders
  Checks if the user type is Manager or higher, and whether the manager is
  part of that collection or not.
- ManagerHeadersLoose
  Checks if the user type is Manager or higher, but does not check whether the
  user is part of the collection; this is needed for a few features like
  retrieving all the users of an org.

I think this is the safest way to implement this, instead of having to
add the check manually to every function that needs it.

Also added some extra checks for whether a manager has access to all collections or
just a selection.

fixes #1136
2020-12-02 22:50:51 +01:00
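A hedged sketch of the checks the two headers perform; the role enum and the way higher roles bypass the collection check are assumptions, not the actual Rocket request guards.

// Hypothetical role enum; deriving PartialOrd orders variants by declaration,
// so `role >= Role::Manager` means "Manager or higher".
#[derive(PartialEq, Eq, PartialOrd, Ord)]
enum Role { User, Manager, Admin, Owner }

// Strict check (ManagerHeaders): Manager or higher, and a manager must also be
// assigned to the collection (how higher roles are treated is an assumption).
fn allow_manager_for_collection(role: Role, manager_in_collection: bool) -> bool {
    match role {
        Role::Admin | Role::Owner => true,
        Role::Manager => manager_in_collection,
        Role::User => false,
    }
}

// Loose check (ManagerHeadersLoose): only the role matters, used for things
// like listing all the users of an org.
fn allow_manager_loose(role: Role) -> bool {
    role >= Role::Manager
}
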
janost 043aa27aa3 Implement admin ability to enable/disable users 2020-11-30 23:12:56 +01:00
Daniel García 9824d94a1c
Merge pull request #1244 from janost/read-config-from-files
Read config vars from files
2020-11-29 15:28:13 +01:00
janost e8ef76b8f9 Read config vars from files 2020-11-29 02:31:49 +01:00
Daniel García be1ddb4203
Merge pull request #1234 from janost/fix-failed-auth-log
Log proper namespace in the err!() macro
2020-11-27 18:49:46 +01:00
janost caddf21fca Log proper namespace in the err!() macro 2020-11-22 00:09:45 +01:00
Daniel García 5379329ef7
Merge pull request #1229 from BlackDex/email-fixes
Email fixes
2020-11-18 16:16:27 +01:00
BlackDex 6faaeaae66 Updated email processing.
- Added an option to enable SMTP debugging via SMTP_DEBUG. This will
  trigger a trace of the SMTP commands sent to and received from the mail
  server. Useful when troubleshooting.
- Added two options to ignore invalid certificates: those that do not
  match at all, and those that only fail the hostname check.
- Updated lettre to the latest alpha.4 version.
2020-11-18 12:07:08 +01:00
BlackDex 3fed323385 Fixed plain/text email format
text/plain emails should not contain HTML elements like <p>, <a>, etc.
This triggers some spam filters and increases the spam score.

Also added the GitHub link to the text-only emails, since a different
URL/link count between the multipart parts also triggers spam filters and
increases the score.
2020-11-18 12:04:16 +01:00
BlackDex 58a928547d Updated admin settings page.
- Added a check for settings that are changed but not saved when sending a test
  email.
- Added some styling to emphasize some risky settings.
- Fixed alignment of elements when the label has multiple lines.
2020-11-18 12:00:25 +01:00
Daniel García 558410c5bd
Merge pull request #1220 from jameshurst/master
Return 404 instead of fallback icon
2020-11-14 14:17:53 +01:00
Daniel García 0dc0decaa7
Merge pull request #1212 from BlackDex/dotenv-warnings
Added error handling during dotenv loading
2020-11-14 14:11:56 +01:00
BlackDex d11d663c5c Added error handling during dotenv loading
Some issues people report are caused by misconfiguration or bad .env
files. To mitigate this I added error handling (sketched below).

- Panic/quit on a LineParse error, which indicates a bad .env file format.
- Emits an info message when there is no .env file found.
- Emits a warning message when there is a .env file but no permission to
  read it.
- Emits a warning on every other error not specifically caught.
2020-11-12 13:40:26 +01:00
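A hedged sketch of the dotenv error handling described above, matching on the dotenv crate's error variants; the exact log messages are illustrative.

fn load_dotenv() {
    match dotenv::dotenv() {
        Ok(_) => (),
        // A LineParse error indicates a badly formatted .env file: quit early.
        Err(dotenv::Error::LineParse(line, pos)) => {
            panic!("Malformed .env entry '{}' at position {}", line, pos)
        }
        Err(dotenv::Error::Io(ioerr)) => match ioerr.kind() {
            // No .env file at all is fine, just mention it.
            std::io::ErrorKind::NotFound => println!("[INFO] no .env file found"),
            // A file that exists but cannot be read is worth a warning.
            std::io::ErrorKind::PermissionDenied => {
                println!("[WARNING] .env file found but no permission to read it")
            }
            _ => println!("[WARNING] could not read the .env file: {:?}", ioerr),
        },
        // Anything else not specifically caught above.
        Err(e) => println!("[WARNING] error loading the .env file: {:?}", e),
    }
}
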
James Hurst 771233176f Fix for negcached icons 2020-11-09 22:06:11 -05:00
James Hurst ed70b07d81 Return 404 instead of fallback icon 2020-11-09 20:47:26 -05:00
Daniel García e25fc7083d
Merge pull request #1219 from aveao/master
Ensure that a user is actually in an org when applying policies
2020-11-07 23:29:12 +01:00
Ave fa364c3f2c
Ensure that a user is actually in an org when applying policies 2020-11-08 01:14:17 +03:00
Daniel García b5f9fe4d3b
Fix #1206 2020-11-07 23:03:02 +01:00
Daniel García 013d4c28b2
Try to fix #1218 2020-11-07 23:01:56 +01:00
Daniel García 63acc8619b
Update dependencies 2020-11-07 23:01:04 +01:00
Daniel García ec920b5756
Merge pull request #1199 from jjlin/delete-admin
Add missing admin endpoints for deleting ciphers
2020-10-23 14:18:41 +02:00
Jeremy Lin 95caaf2a40 Add missing admin endpoints for deleting ciphers
This fixes the inability to bulk-delete ciphers from org vault views.
2020-10-23 03:42:22 -07:00
Mathijs van Veluw 7099f8bee8
Merge pull request #1198 from fabianvansteen/patch-1
Correction of verify_email error message
2020-10-23 11:40:39 +02:00
Fabian van Steen b41a0d840c
Correction of verify_email error message 2020-10-23 10:30:25 +02:00
Daniel García c577ade90e
Updated dependencies 2020-10-15 23:44:35 +02:00
Daniel García 257b143df1
Remove some duplicate code in Dockerfile with the help of some variables 2020-10-11 17:27:15 +02:00
Daniel García 34ee326ce9
Merge pull request #1178 from BlackDex/update-azure-pipelines
Updated the azure-pipelines.yml for multidb
2020-10-11 17:25:15 +02:00
Daniel García 090104ce1b
Merge pull request #1181 from BlackDex/update-issue-template
Updated bug-report to note to update first
2020-10-11 17:24:42 +02:00
BlackDex 3305d5dc92 Updated bug-report to note to update first 2020-10-11 15:58:31 +02:00
Daniel García 296063e135
Merge pull request #1108 from rfwatson/add-db-connections-setting
Add DATABASE_MAX_CONNS config setting
2020-10-10 00:25:03 +02:00
Rob Watson b9daa59e5d Add DATABASE_MAX_CONNS config setting 2020-10-09 10:29:02 +02:00
BlackDex 5bdcfe128d Updated the azure-pipelines.yml for multidb
Updated the azure-pipelines.yml to build multidb now.
- Updated to Ubuntu 18.04 (more closely matches the docker builds)
- Added some really needed apt packages to be sure that they are
  installed
- Now runs cargo test using all database backends in one go.
2020-10-08 18:48:05 +02:00
Daniel García 1842a796fb
Merge pull request #1176 from BlackDex/fix-alpine-arm-docker
Fixed issue with building Alpine armv7 image.
2020-10-08 16:05:03 +02:00
BlackDex ce99e5c583 Fixed issue with building Alpine armv7 image.
The runtime image was using a very old Alpine version.
This caused issues with the catatonit install

Now using the Balena armv7hf Alpine image for this.
2020-10-08 13:12:58 +02:00
Daniel García 0c96c2d305
Merge pull request #1171 from BlackDex/arm-multidb
Fixed building mysql, postgresql and sqlite3 for arm
2020-10-07 23:20:16 +02:00
Daniel García 5796b6b554
Merge pull request #1172 from BlackDex/missing-dotenv-config
Updated .env.template with missing config values.
2020-10-07 23:19:38 +02:00
BlackDex c7ab27c86f Updated .env.template with missing config values.
Several values were missing from the .env.template file.
Updated it to include all the missing values.
2020-10-06 18:54:21 +02:00
BlackDex 8c03746a67 Fixed building mysql, postgresql and sqlite3 for arm
With some apt/dpkg magic, building multidb containers for the arm versions
now also works. As long as the build stage and docker-image stage use
the same base (Debian buster for now), it should all work.

Resolves #530, resolves #1066
2020-10-06 18:04:53 +02:00
Daniel García 8746d36845
Document database connection retries and change alpine repo for catatonit
(cherry picked from commit 88e3835050c0418c060c8e3a704894763ee33aa0)
2020-10-04 14:14:26 +02:00
Daniel García 448e6ac917
Invalidate sessions when changing password or kdf values 2020-10-03 22:43:13 +02:00
Daniel García 729c9cff41
Retry initial db connection, with adjustable option 2020-10-03 22:32:00 +02:00
Daniel García 22b9c80007
Reorganize dockerfile template slightly (same result) 2020-10-03 20:59:48 +02:00
Daniel García ab4355cfed
Updated web vault, dependencies and base docker images 2020-10-03 20:50:13 +02:00
Daniel García 948dc82228
Updated sponsors 2020-10-03 20:48:02 +02:00
Daniel García bc74fd23e7
Merge pull request #1151 from Start9Labs/feature/alpine-arm
alpine arm dockerfile
2020-10-03 20:47:42 +02:00
Daniel García 37776241be
Merge pull request #1150 from BlackDex/mariadb-fk-issues
Fixed foreign-key (mariadb) errors.
2020-09-27 16:24:34 +02:00
Daniel García feba41ec88
Merge pull request #1160 from eduardosm/vendored_openssl
Add `vendored_openssl` feature.
2020-09-27 16:24:11 +02:00
Aiden McClelland 6a8f42da8a
specify version of cmosh's alpine-arm
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2020-09-26 14:03:20 -06:00
Aiden McClelland 670d8cb83a
add arm target to alpine container 2020-09-26 14:02:47 -06:00
Eduardo Sánchez Muñoz 2f7fbde789 Add `vendored_openssl` feature.
This feature enables the `vendored` feature of the `openssl` crate and builds a statically linked version of OpenSSL.
2020-09-25 23:25:53 +02:00
Mathijs van Veluw c698bca2b9
Merge branch 'master' into mariadb-fk-issues 2020-09-25 22:25:57 +02:00
Daniel García 0b6a003a8b
Merge pull request #1158 from BlackDex/fix-issue-1156
Add /api/accounts/verify-password endpoint
2020-09-25 21:14:25 +02:00
BlackDex c64560016e Add /api/accounts/verify-password endpoint
If for some reason the hashed password is cleared from memory within a
Bitwarden client, the client will try to verify the password on the server side.

This endpoint was missing.

Resolves #1156
2020-09-25 18:26:48 +02:00
BlackDex 978be0b4a9 Fixed foreign-key (mariadb) errors.
When using MariaDB v10.5+, foreign-key errors were popping up because of
some changes in that version. To mitigate this on MariaDB and other
MySQL forks, those errors are now caught, and an update is performed
instead of a replace_into (sketched below). I have tested this as thoroughly as possible with
MariaDB 10.5, 10.4, 10.3 and the default MySQL on Ubuntu Focal, and
tested it again using SQLite; all seems to be ok on all tables.

Resolves #1081, resolves #1065, resolves #1050
2020-09-22 12:13:02 +02:00
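A generic, hedged sketch of the save pattern described above (not the actual diesel code): try the replace_into first and fall back to an update when the backend reports a foreign-key violation.

fn save_with_fk_fallback<T, E>(
    replace_into: impl FnOnce() -> Result<T, E>,
    update_existing: impl FnOnce() -> Result<T, E>,
    is_foreign_key_violation: impl Fn(&E) -> bool,
) -> Result<T, E> {
    match replace_into() {
        // MariaDB 10.5+ may reject the REPLACE INTO with a foreign-key error,
        // in which case updating the existing row achieves the same result.
        Err(e) if is_foreign_key_violation(&e) => update_existing(),
        other => other,
    }
}
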
Aiden McClelland b58bff1178 alpine arm building successfully 2020-09-21 16:39:39 -06:00
Daniel García 2f3e18caa9
Merge pull request #1146 from BlackDex/user-orgs-table-enhancement
Enhanced user and orgs tables in admin view.
2020-09-20 16:48:19 +02:00
BlackDex 6a291040bd As requested here: https://bitwardenrs.discourse.group/t/searchable-user-list-on-admin-panel/299
- Changed the table layout a bit.
- Added functions to the tables:
  + Search
  + Sort
  + Paginate
2020-09-19 22:19:55 +02:00
Daniel García dbc082dc75
Update web vault to 2.16.0 and dependencies 2020-09-19 22:01:14 +02:00
Daniel García 32a0dd09bf
Merge pull request #1148 from BlackDex/update-config-descriptions
Updated the config options descriptions.
2020-09-19 21:38:01 +02:00
BlackDex f847c6e225 Updated the config options descriptions.
Made some small changes to the descriptions of the SMTP config options.
Some were a bit cryptic or missing extra details.

Also made it clearer which type of secured SMTP connection is going
to be used.
2020-09-19 17:09:58 +02:00
Daniel García 99da5fbebb
Merge pull request #1143 from BlackDex/better-lettre-errors
Format some common Lettre errors a bit simpler
2020-09-14 22:18:47 +02:00
BlackDex 6a0d024c69 Format some common Lettre errors a bit simpler
Currently, using the admin interface to send out a test e-mail, for example, just
returns `SmtpError`, which is not very helpful. What I have done:

- Match some common Lettre errors to return the actual error message (sketched below).
- Other errors will just be passed on as before.

Some other small changes:
- Fixed a clippy warning about using clone().
- Fixed a typo where Lettre was spelled with one t.
2020-09-14 20:47:46 +02:00
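A hedged illustration of the idea using a hypothetical error enum (lettre's real error types differ): map common SMTP failures to readable messages instead of a bare `SmtpError`.

// Hypothetical error categories, not lettre's actual types.
enum MailError {
    Connection(String),
    Authentication(String),
    Other(String),
}

fn friendly_message(err: &MailError) -> String {
    match err {
        MailError::Connection(msg) => format!("SMTP connection failed: {}", msg),
        MailError::Authentication(msg) => format!("SMTP authentication failed: {}", msg),
        MailError::Other(msg) => format!("SMTP error: {}", msg),
    }
}
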
Daniel García b24929a243
Merge pull request #1141 from BlackDex/fix-org-creation
Fixed creating a new organization
2020-09-14 18:01:09 +02:00
BlackDex 9a47821642 Fixed creating a new organization
- The new web-vault needs a new api endpoint.
- Added this new endpoint.

Fixes #1139
2020-09-14 08:34:17 +02:00
Daniel García d69968313b
Merge pull request #1140 from jjlin/UserOrgType-cmp
Simplify implementation of `UserOrgType::cmp()`
2020-09-13 15:10:54 +02:00
Daniel García 3c377d97dc
Merge pull request #1137 from BlackDex/smtp-multi-auth-mechanism
Allow multiple SMTP Auth mechanisms.
2020-09-13 15:09:58 +02:00
Daniel García ea15218197
Merge pull request #1135 from BlackDex/update-mail
Updated lettre (and other crates) and workflow.
2020-09-13 15:08:01 +02:00
Jeremy Lin 0eee907c88 Simplify implementation of `UserOrgType::cmp()`
Also move `UserOrgType::from_str()` closer to the definition of `UserOrgType`
since it references specific enum values.
2020-09-13 02:03:16 -07:00
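A hedged sketch of the simplification: give each variant a numeric rank so `Ord` becomes a single comparison instead of a match over variant pairs; the exact ranks are assumptions.

use std::cmp::Ordering;

#[derive(PartialEq, Eq, Clone, Copy)]
enum UserOrgType { Owner, Admin, Manager, User }

impl UserOrgType {
    // One rank per variant; higher rank means more privileges.
    fn rank(self) -> u8 {
        match self {
            UserOrgType::Owner => 3,
            UserOrgType::Admin => 2,
            UserOrgType::Manager => 1,
            UserOrgType::User => 0,
        }
    }
}

impl Ord for UserOrgType {
    fn cmp(&self, other: &Self) -> Ordering {
        self.rank().cmp(&other.rank())
    }
}

impl PartialOrd for UserOrgType {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
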
BlackDex c877583979 Allow multiple SMTP Auth mechanisms.
- Allow all SMTP Auth mechanisms supported by Lettre.
- The order of the config value is leading, and values can be separated by a
  comma ','.
- Case doesn't matter, and invalid values are ignored.
- A warning is printed when no valid value is found at all (sketched below).
2020-09-12 21:47:24 +02:00
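A hedged sketch of the parsing rules listed above; `Mechanism` is a stand-in for lettre's auth mechanism type.

#[derive(Debug, Clone, Copy)]
enum Mechanism { Plain, Login, Xoauth2 }

// Split the config value on commas, ignore case and unknown entries in the
// configured order, and warn when nothing valid remains.
fn parse_mechanisms(raw: &str) -> Vec<Mechanism> {
    let parsed: Vec<Mechanism> = raw
        .split(',')
        .filter_map(|m| match m.trim().to_lowercase().as_str() {
            "plain" => Some(Mechanism::Plain),
            "login" => Some(Mechanism::Login),
            "xoauth2" => Some(Mechanism::Xoauth2),
            _ => None, // invalid values are ignored
        })
        .collect();
    if parsed.is_empty() {
        eprintln!("warning: no valid SMTP auth mechanism configured");
    }
    parsed
}
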
BlackDex 844cf70345 Updated lettre (and other crates) and workflow.
General:
- Updated several dependencies

Lettre:
- Updated lettre and the workflow
- Changed encoding to base64
- Convert unix newlines to dos newlines for e-mails.
- Created a custom e-mail boundary (the auto-generated one could cause errors)

Tested the e-mails sent using several clients (Linux, Windows, MacOS, Web).
Ran msglint (https://tools.ietf.org/tools/msglint/) on the generated e-mails until all errors were gone.

Lettre has changed quite a bit between alpha.1 and alpha.2; I haven't noticed any issues sending e-mails during my tests.
2020-09-11 23:52:20 +02:00
Daniel García a0d92a167c
Merge pull request #1125 from jjlin/org-cipher-visibility
Hide ciphers from non-selected collections for org owners/admins
2020-09-10 23:19:14 +02:00
Daniel García d7b0d6f9f5
Merge pull request #1124 from aaxdev/fix/support-mobile-signalr-msgpack
Fixing MsgPack headers type and support mobile SignalR
2020-09-03 19:51:04 +02:00
Jeremy Lin 4c3b328aca Hide ciphers from non-selected collections for org owners/admins
If org owners/admins set their org access to only include selected
collections, then ciphers from non-selected collections shouldn't
appear in "My Vault". This matches the upstream behavior.
2020-09-01 02:20:25 -07:00
aaxdev 260ffee093 Improving code 2020-08-31 22:20:21 +02:00
aaxdev c59cfe3371 Fix MsgPack headers and support mobile SignalR 2020-08-31 19:05:07 +02:00
Daniel García 0822c0c128
Update admin page dependencies 2020-08-31 16:40:21 +02:00
Daniel García 57a88f0a1b
Merge pull request #1119 from mqus/clarify-multiuser-docs
Clarify multiuser capabilities
2020-08-30 15:44:36 +02:00
Markus Richter 87393409f9
Clean up misunderstandings by removing the single-user point 2020-08-29 12:47:16 +02:00
Daniel García 062f5e4712
Update dependencies 2020-08-28 22:11:55 +02:00
Daniel García aaba1e8368
Fix some clippy warnings and remove unused function 2020-08-28 22:10:28 +02:00
Daniel García ff2684dfee
Merge pull request #1116 from jjlin/docker
Fix the Alpine build
2020-08-27 12:34:36 +02:00
Jeremy Lin 6b5fa201aa Fix the Alpine build 2020-08-26 23:44:34 -07:00
Daniel García 7167e443ca
Merge pull request #1115 from jjlin/favorites
Delete associated favorites when deleting a cipher or user
2020-08-26 17:54:50 +02:00
Jeremy Lin 175d647e47 Delete associated favorites when deleting a cipher or user
This prevents foreign key constraint violations.
2020-08-26 01:27:38 -07:00
Daniel García 4c324e1160
Change Dockerfiles to make the AMD image multidb 2020-08-24 20:58:00 +02:00
Daniel García 0365b7c6a4
Add support for multiple simultaneous database features by using macros.
Diesel requires the following changes:
- Separate connection and pool types per connection, the generate_connections! macro generates an enum with a variant per db type
- Separate migrations and schemas, these were always imported as one type depending on db feature, now they are all imported under different module names
- Separate model objects per connection, the db_object! macro generates one object for each connection with the diesel macros, a generic object, and methods to convert between the connection-specific and the generic ones
- Separate connection queries, the db_run! macro allows writing only one that gets compiled for all databases or multiple ones
2020-08-24 20:11:17 +02:00
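A much-simplified, hedged sketch of what the generated connection enum amounts to: one variant per enabled backend so the rest of the code can hold a single connection type; the placeholder types stand in for the real diesel connections.

// Placeholder connection types; generate_connections! wires up the diesel ones.
struct SqliteConn;
struct MysqlConn;
struct PostgresqlConn;

enum DbConn {
    Sqlite(SqliteConn),
    Mysql(MysqlConn),
    Postgresql(PostgresqlConn),
}

impl DbConn {
    // Code in the spirit of db_run! dispatches on the variant, so a query can
    // be written once and run against every enabled backend.
    fn backend_name(&self) -> &'static str {
        match self {
            DbConn::Sqlite(_) => "sqlite",
            DbConn::Mysql(_) => "mysql",
            DbConn::Postgresql(_) => "postgresql",
        }
    }
}
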
Daniel García 19889187a5
Merge pull request #1106 from jjlin/favorites
Track favorites on a per-user basis
2020-08-24 00:31:48 +02:00
Daniel García 9571277c44
Merge pull request #1112 from jjlin/token-size-docs
Add more docs on the `email_token_size` setting
2020-08-24 00:25:47 +02:00
Daniel García a202da9e23
Merge pull request #1099 from jjlin/global-domains
Sync global_domains.json with upstream
2020-08-24 00:25:33 +02:00
Daniel García e5a77a477d
Merge pull request #1111 from jjlin/token
Generate tokens more simply and uniformly
2020-08-24 00:25:12 +02:00
Jeremy Lin c05dc50f53 Add more docs on the `email_token_size` setting 2020-08-22 17:35:55 -07:00
Jeremy Lin 3bbdbb832c Transfer favorite status for user-owned ciphers 2020-08-22 17:14:05 -07:00
Jeremy Lin d9684bef6b Generate tokens more simply and uniformly 2020-08-22 16:07:53 -07:00
Jeremy Lin db0c45c172 Sync global_domains.json to bitwarden/server@8383a08 (Yandex) 2020-08-20 03:31:21 -07:00
Jeremy Lin ad4393e3f7 Sync global_domains.json to bitwarden/server@80f57d2 (Amazon updates) 2020-08-20 03:30:39 -07:00
Jeremy Lin f83a8a36d1 Track favorites on a per-user basis
Currently, favorites are tracked at the cipher level. For org-owned ciphers,
this means that if one user sets it as a favorite, it automatically becomes a
favorite for all other users that the cipher has been shared with.
2020-08-19 02:32:58 -07:00
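A hedged sketch of the per-user favorite association this change moves to; field names are illustrative.

// Instead of a flag on the cipher row, a favorite becomes a (user, cipher)
// link, so one user's favorite doesn't affect the other users the cipher is
// shared with.
struct Favorite {
    user_uuid: String,
    cipher_uuid: String,
}

fn is_favorite(favorites: &[Favorite], user_uuid: &str, cipher_uuid: &str) -> bool {
    favorites
        .iter()
        .any(|f| f.user_uuid == user_uuid && f.cipher_uuid == cipher_uuid)
}
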
Jeremy Lin 0e9eba8c8b Maximize similarity between MySQL and SQLite/PostgreSQL schemas
In particular, Diesel aliases `Varchar` to `Text`, and `Blob` to `Binary`:

* https://docs.diesel.rs/diesel/sql_types/struct.Text.html
* https://docs.diesel.rs/diesel/sql_types/struct.Binary.html
2020-08-19 02:32:56 -07:00
Jeremy Lin d5c760960a Sync global_domains.json to bitwarden/server@af85e17 (eBay India updates) 2020-08-19 00:40:59 -07:00
Jeremy Lin 2c6ef2bc68 Sync global_domains.json to bitwarden/server@2c43019 (eBay updates) 2020-08-15 01:34:12 -07:00
Jeremy Lin 7032ae5587 Sync global_domains.json to bitwarden/server@6aed80a (Amazon updates) 2020-08-15 01:32:56 -07:00
Daniel García eba22c2d94
Merge pull request #1095 from jjlin/db-docs
Add more doc comments for MySQL/PostgreSQL connection URIs
2020-08-13 22:34:46 +02:00
Daniel García 11cc9ae0c0
Merge pull request #1094 from jjlin/master
Sync global_domains.json to bitwarden/server@61b11e3
2020-08-13 22:34:20 +02:00
Daniel García fb648db47d
Merge pull request #1096 from mqus/mqus-env-document-override
Add a note that settings in .env can be overridden
2020-08-13 22:34:03 +02:00
mqus 959283d333
Add a note that settings in .env can be overridden 2020-08-13 17:49:25 +02:00
Jeremy Lin 385c2227e7 Add more doc comments for MySQL/PostgreSQL connection URIs
Note that Diesel implements its own parser for MySQL connection URIs, so it
probably doesn't accept the full range of syntax that would be accepted by
MySQL's client libraries, whereas for PostgreSQL, Diesel simply passes the
connection string/URI to PostgreSQL's libpq for processing.
2020-08-13 02:33:22 -07:00
Jeremy Lin 6d9f03e84b Sync global_domains.json to bitwarden/server@61b11e3 2020-08-12 21:10:31 -07:00
Daniel García 6a972e4b19
Make the admin URL redirect try to use the referrer first, and use /admin when DOMAIN is not configured and the referrer check doesn't work, to allow users without DOMAIN configured to use the admin page correctly 2020-08-12 19:07:52 +02:00
Daniel García 171b174ce9
Update dependencies 2020-08-12 18:46:28 +02:00
Daniel García 93b7ded1e6
Remove unnecessary shim for backtrace 2020-08-12 18:45:26 +02:00
Daniel García 29c6b145ca
Remove redundant user fetching from login 2020-08-11 16:48:15 +02:00
Daniel García a7a479623c
Merge pull request #1087 from jjlin/org-creation-users
Add support for restricting org creation to certain users
2020-08-08 16:20:15 +02:00
Daniel García 83dff9ae6e
Merge pull request #1083 from jjlin/global-domains
Add a script to auto-generate the global equivalent domains JSON file
2020-08-08 16:19:30 +02:00
Daniel García 6b2cc5a3ee
Merge pull request #1089 from jjlin/master
Don't push `latest-arm32v6` tag for MySQL and PostgreSQL images
2020-08-07 20:39:17 +02:00
Jeremy Lin 5247e0d773 Don't push `latest-arm32v6` tag for MySQL and PostgreSQL images 2020-08-07 10:15:15 -07:00
Jeremy Lin 05b308b8b4 Sync global_domains.json with upstream 2020-08-06 12:13:40 -07:00
Jeremy Lin 9621278fca Add a script to auto-generate the global equivalent domains JSON file
The script works by reading the relevant files from the upstream Bitwarden
source repo and generating a matching JSON file. It could potentially be
integrated into the build/release process, but for now it can be run manually
as needed.
2020-08-06 12:12:32 -07:00
Jeremy Lin 570d6c8bf9 Add support for restricting org creation to certain users 2020-08-05 22:35:29 -07:00
Daniel García ad48e9ed0f
Fix unlock on desktop clients 2020-08-04 15:12:04 +02:00
Daniel García f724addf9a
Merge pull request #1076 from jjlin/soft-delete
Fix soft delete notifications
2020-07-28 17:44:33 +02:00
Daniel García aa20974703
Merge pull request #1075 from jjlin/master
Push an extra `latest-arm32v6` tag
2020-07-28 17:43:59 +02:00
Jeremy Lin a846f6c610 Fix soft delete notifications
A soft-deleted entry should now show up in the trash folder immediately
(previously, an extra sync was required).
2020-07-26 16:19:47 -07:00
Jeremy Lin c218c34812 Push an extra `latest-arm32v6` tag
This fixes a gap in PR #1069.
2020-07-26 15:28:14 -07:00
104 changed files with 29819 additions and 4123 deletions

View File

@ -3,6 +3,7 @@ target
# Data folder
data
.env
# IDE files
.vscode
@ -10,5 +11,15 @@ data
*.iml
# Documentation
.github
*.md
*.txt
*.yml
*.yaml
# Docker folders
hooks
tools
# Web vault
web-vault

View File

@ -1,14 +1,28 @@
## Bitwarden_RS Configuration File
## Uncomment any of the following lines to change the defaults
##
## Be aware that most of these settings will be overridden if they were changed
## in the admin interface. Those overrides are stored within DATA_FOLDER/config.json .
## Main data folder
# DATA_FOLDER=data
## Database URL
## When using SQLite, this is the path to the DB file, default to %DATA_FOLDER%/db.sqlite3
## When using MySQL, this it is the URL to the DB, including username and password:
## Format: mysql://[user[:password]@]host/database_name
# DATABASE_URL=data/db.sqlite3
## When using MySQL, specify an appropriate connection URI.
## Details: https://docs.diesel.rs/diesel/mysql/struct.MysqlConnection.html
# DATABASE_URL=mysql://user:password@host[:port]/database_name
## When using PostgreSQL, specify an appropriate connection URI (recommended)
## or keyword/value connection string.
## Details:
## - https://docs.diesel.rs/diesel/pg/struct.PgConnection.html
## - https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
# DATABASE_URL=postgresql://user:password@host[:port]/database_name
## Database max connections
## Define the size of the connection pool used for connecting to the database.
# DATABASE_MAX_CONNS=10
## Individual folders, these override %DATA_FOLDER%
# RSA_KEY_FILENAME=data/rsa_key
@ -60,7 +74,7 @@
## Log level
## Change the verbosity of the log output
## Valid values are "trace", "debug", "info", "warn", "error" and "off"
## Setting it to "trace" or "debug" would also show logs for mounted
## Setting it to "trace" or "debug" would also show logs for mounted
## routes and static file, websocket and alive requests
# LOG_LEVEL=Info
@ -72,6 +86,10 @@
## cause performance degradation or might render the service unable to start.
# ENABLE_DB_WAL=true
## Database connection retries
## Number of times to retry the database connection during startup, with 1 second delay between each retry, set to 0 to retry indefinitely
# DB_CONNECTION_RETRIES=15
## Disable icon downloading
## Set to true to disable icon downloading, this would still serve icons from $ICON_CACHE_FOLDER,
## but it won't produce any external network request. Needs to set $ICON_CACHE_TTL to 0,
@ -86,10 +104,11 @@
## Icon blacklist Regex
## Any domains or IPs that match this regex won't be fetched by the icon service.
## Useful to hide other servers in the local network. Check the WIKI for more details
# ICON_BLACKLIST_REGEX=192\.168\.1\.[0-9].*^
## NOTE: Always enclose this regex withing single quotes!
# ICON_BLACKLIST_REGEX='^(192\.168\.0\.[0-9]+|192\.168\.1\.[0-9]+)$'
## Any IP which is not defined as a global IP will be blacklisted.
## Usefull to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block
## Useful to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block
# ICON_BLACKLIST_NON_GLOBAL_IPS=true
## Disable 2FA remember
@ -97,6 +116,18 @@
## Note that the checkbox would still be present, but ignored.
# DISABLE_2FA_REMEMBER=false
## Maximum attempts before an email token is reset and a new email will need to be sent.
# EMAIL_ATTEMPTS_LIMIT=3
## Token expiration time
## Maximum time in seconds a token is valid. The time the user has to open email client and copy token.
# EMAIL_EXPIRATION_TIME=600
## Email token size
## Number of digits in an email token (min: 6, max: 19).
## Note that the Bitwarden clients are hardcoded to mention 6 digit codes regardless of this setting!
# EMAIL_TOKEN_SIZE=6
## Controls if new users can register
# SIGNUPS_ALLOWED=true
@ -118,6 +149,14 @@
## even if SIGNUPS_ALLOWED is set to false
# SIGNUPS_DOMAINS_WHITELIST=example.com,example.net,example.org
## Controls which users can create new orgs.
## Blank or 'all' means all users can create orgs (this is the default):
# ORG_CREATION_USERS=
## 'none' means no users can create orgs:
# ORG_CREATION_USERS=none
## A comma-separated list means only those users can create orgs:
# ORG_CREATION_USERS=admin1@example.com,admin2@example.com
## Token for the admin interface, preferably use a long random string
## One option is to use 'openssl rand -base64 48'
## If not set, the admin panel is disabled
@ -129,6 +168,16 @@
## Invitations org admins to invite users, even when signups are disabled
# INVITATIONS_ALLOWED=true
## Name shown in the invitation emails that don't come from a specific organization
# INVITATION_ORG_NAME=Bitwarden_RS
## Per-organization attachment limit (KB)
## Limit in kilobytes for an organization attachments, once the limit is exceeded it won't be possible to upload more
# ORG_ATTACHMENT_LIMIT=
## Per-user attachment limit (KB).
## Limit in kilobytes for a users attachments, once the limit is exceeded it won't be possible to upload more
# USER_ATTACHMENT_LIMIT=
## Controls the PBBKDF password iterations to apply on the server
## The change only applies when the password is changed
@ -144,6 +193,13 @@
## For U2F to work, the server must use HTTPS, you can use Let's Encrypt for free certs
# DOMAIN=https://bw.domain.tld:8443
## Allowed iframe ancestors (Know the risks!)
## https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/frame-ancestors
## Allows other domains to embed the web vault into an iframe, useful for embedding into secure intranets
## This adds the configured value to the 'Content-Security-Policy' headers 'frame-ancestors' value.
## Multiple values must be separated with a whitespace.
# ALLOWED_IFRAME_ANCESTORS=
## Yubico (Yubikey) Settings
## Set your Client ID and Secret Key for Yubikey OTP
## You can generate it here: https://upgrade.yubico.com/getapikey/
@ -166,7 +222,7 @@
## Authenticator Settings
## Disable authenticator time drifted codes to be valid.
## TOTP codes of the previous and next 30 seconds will be invalid
##
##
## According to the RFC6238 (https://tools.ietf.org/html/rfc6238),
## we allow by default the TOTP code which was valid one step back and one in the future.
## This can however allow attackers to be a bit more lucky with there attempts because there are 3 valid codes.
@ -187,12 +243,45 @@
# SMTP_HOST=smtp.domain.tld
# SMTP_FROM=bitwarden-rs@domain.tld
# SMTP_FROM_NAME=Bitwarden_RS
# SMTP_PORT=587
# SMTP_SSL=true
# SMTP_EXPLICIT_TLS=true # N.B. This variable configures Implicit TLS. It's currently mislabelled (see bug #851)
# SMTP_PORT=587 # Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 is outdated and used with Implicit TLS.
# SMTP_SSL=true # (Explicit) - This variable by default configures Explicit STARTTLS, it will upgrade an insecure connection to a secure one. Unless SMTP_EXPLICIT_TLS is set to true. Either port 587 or 25 are default.
# SMTP_EXPLICIT_TLS=true # (Implicit) - N.B. This variable configures Implicit TLS. It's currently mislabelled (see bug #851) - SMTP_SSL Needs to be set to true for this option to work. Usually port 465 is used here.
# SMTP_USERNAME=username
# SMTP_PASSWORD=password
# SMTP_AUTH_MECHANISM="Plain"
# SMTP_TIMEOUT=15
## Defaults for SSL is "Plain" and "Login" and nothing for Non-SSL connections.
## Possible values: ["Plain", "Login", "Xoauth2"].
## Multiple options need to be separated by a comma ','.
# SMTP_AUTH_MECHANISM="Plain"
## Server name sent during the SMTP HELO
## By default this value should be is on the machine's hostname,
## but might need to be changed in case it trips some anti-spam filters
# HELO_NAME=
## SMTP debugging
## When set to true this will output very detailed SMTP messages.
## WARNING: This could contain sensitive information like passwords and usernames! Only enable this during troubleshooting!
# SMTP_DEBUG=false
## Accept Invalid Hostnames
## DANGEROUS: This option introduces significant vulnerabilities to man-in-the-middle attacks!
## Only use this as a last resort if you are not able to use a valid certificate.
# SMTP_ACCEPT_INVALID_HOSTNAMES=false
## Accept Invalid Certificates
## DANGEROUS: This option introduces significant vulnerabilities to man-in-the-middle attacks!
## Only use this as a last resort if you are not able to use a valid certificate.
## If the Certificate is valid but the hostname doesn't match, please use SMTP_ACCEPT_INVALID_HOSTNAMES instead.
# SMTP_ACCEPT_INVALID_CERTS=false
## Require new device emails. When a user logs in an email is required to be sent.
## If sending the email fails the login attempt will fail!!
# REQUIRE_DEVICE_EMAIL=false
## HIBP Api Key
## HaveIBeenPwned API Key, request it here: https://haveibeenpwned.com/API/Key
# HIBP_API_KEY=
# vim: syntax=ini

View File

@ -6,27 +6,36 @@ labels: ''
assignees: ''
---
<!--
# ###
NOTE: Please update to the latest version of bitwarden_rs before reporting an issue!
This saves you and us a lot of time and troubleshooting.
See: https://github.com/dani-garcia/bitwarden_rs/issues/1180
# ###
-->
<!--
Please fill out the following template to make solving your problem easier and faster for us.
This is only a guideline. If you think that parts are unneccessary for your issue, feel free to remove them.
This is only a guideline. If you think that parts are unnecessary for your issue, feel free to remove them.
Remember to hide/obfuscate personal and confidential information,
such as names, global IP/DNS adresses and especially passwords, if neccessary.
Remember to hide/obfuscate personal and confidential information,
such as names, global IP/DNS addresses and especially passwords, if necessary.
-->
### Subject of the issue
<!-- Describe your issue here.-->
### Your environment
<!-- The version number, obtained from the logs or the admin page -->
* Bitwarden_rs version:
<!-- The version number, obtained from the logs or the admin diagnostics page -->
<!-- Remember to check your issue on the latest version first! -->
* Bitwarden_rs version:
<!-- How the server was installed: Docker image / package / built from source -->
* Install method:
* Install method:
* Clients used: <!-- if applicable -->
* Reverse proxy and version: <!-- if applicable -->
* Version of mysql/postgresql: <!-- if applicable -->
* Other relevant information:
* Other relevant information:
### Steps to reproduce
<!-- Tell us how to reproduce this issue. What parameters did you set (differently from the defaults)

125
.github/workflows/build.yml vendored Normal file
View File

@ -0,0 +1,125 @@
name: Build
on:
push:
# Ignore when there are only changes done too one of these paths
paths-ignore:
- "**.md"
- "**.txt"
- "azure-pipelines.yml"
- "docker/**"
- "hooks/**"
- "tools/**"
jobs:
build:
strategy:
fail-fast: false
matrix:
channel:
- nightly
# - stable
target-triple:
- x86_64-unknown-linux-gnu
# - x86_64-unknown-linux-musl
include:
- target-triple: x86_64-unknown-linux-gnu
host-triple: x86_64-unknown-linux-gnu
features: "sqlite,mysql,postgresql"
channel: nightly
os: ubuntu-18.04
ext:
# - target-triple: x86_64-unknown-linux-gnu
# host-triple: x86_64-unknown-linux-gnu
# features: "sqlite,mysql,postgresql"
# channel: stable
# os: ubuntu-18.04
# ext:
# - target-triple: x86_64-unknown-linux-musl
# host-triple: x86_64-unknown-linux-gnu
# features: "sqlite,postgresql"
# channel: nightly
# os: ubuntu-18.04
# ext:
# - target-triple: x86_64-unknown-linux-musl
# host-triple: x86_64-unknown-linux-gnu
# features: "sqlite,postgresql"
# channel: stable
# os: ubuntu-18.04
# ext:
name: Building ${{ matrix.channel }}-${{ matrix.target-triple }}
runs-on: ${{ matrix.os }}
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@v2
# End Checkout the repo
# Install musl-tools when needed
- name: Install musl tools
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends musl-dev musl-tools cmake
if: matrix.target-triple == 'x86_64-unknown-linux-musl'
# End Install musl-tools when needed
# Install dependencies
- name: Install dependencies Ubuntu
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends openssl sqlite build-essential libmariadb-dev-compat libpq-dev libssl-dev pkgconf
if: startsWith( matrix.os, 'ubuntu' )
# End Install dependencies
# Enable Rust Caching
- uses: Swatinem/rust-cache@v1
# End Enable Rust Caching
# Uses the rust-toolchain file to determine version
- name: 'Install ${{ matrix.channel }}-${{ matrix.host-triple }} for target: ${{ matrix.target-triple }}'
uses: actions-rs/toolchain@v1
with:
profile: minimal
target: ${{ matrix.target-triple }}
# End Uses the rust-toolchain file to determine version
# Run cargo tests (In release mode to speed up cargo build afterwards)
- name: '`cargo test --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}`'
uses: actions-rs/cargo@v1
with:
command: test
args: --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}
# End Run cargo tests
# Build the binary
- name: '`cargo build --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}`'
uses: actions-rs/cargo@v1
with:
command: build
args: --release --features ${{ matrix.features }} --target ${{ matrix.target-triple }}
# End Build the binary
# Upload artifact to Github Actions
- name: Upload artifact
uses: actions/upload-artifact@v2
with:
name: bitwarden_rs-${{ matrix.target-triple }}${{ matrix.ext }}
path: target/${{ matrix.target-triple }}/release/bitwarden_rs${{ matrix.ext }}
# End Upload artifact to Github Actions
## This is not used at the moment
## We could start using this when we can build static binaries
# Upload to github actions release
# - name: Release
# uses: Shopify/upload-to-release@1
# if: startsWith(github.ref, 'refs/tags/')
# with:
# name: bitwarden_rs-${{ matrix.target-triple }}${{ matrix.ext }}
# path: target/${{ matrix.target-triple }}/release/bitwarden_rs${{ matrix.ext }}
# repo-token: ${{ secrets.GITHUB_TOKEN }}
# End Upload to github actions release

34
.github/workflows/hadolint.yml vendored Normal file
View File

@ -0,0 +1,34 @@
name: Hadolint
on:
pull_request:
# Ignore when there are only changes done too one of these paths
paths:
- "docker/**"
jobs:
hadolint:
name: Validate Dockerfile syntax
runs-on: ubuntu-20.04
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@v2
# End Checkout the repo
# Download hadolint
- name: Download hadolint
shell: bash
run: |
sudo curl -L https://github.com/hadolint/hadolint/releases/download/v$HADOLINT_VERSION/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint && \
sudo chmod +x /usr/local/bin/hadolint
env:
HADOLINT_VERSION: 1.19.0
# End Download hadolint
# Test Dockerfiles
- name: Run hadolint
shell: bash
run: git ls-files --exclude='docker/*/Dockerfile*' --ignored | xargs hadolint
# End Test Dockerfiles

View File

@ -1,148 +0,0 @@
name: Workflow
on:
push:
paths-ignore:
- "**.md"
#pull_request:
# paths-ignore:
# - "**.md"
jobs:
build:
name: Build
strategy:
fail-fast: false
matrix:
db-backend: [sqlite, mysql, postgresql]
target:
- x86_64-unknown-linux-gnu
# - x86_64-unknown-linux-musl
# - x86_64-apple-darwin
# - x86_64-pc-windows-msvc
include:
- target: x86_64-unknown-linux-gnu
os: ubuntu-latest
ext:
# - target: x86_64-unknown-linux-musl
# os: ubuntu-latest
# ext:
# - target: x86_64-apple-darwin
# os: macOS-latest
# ext:
# - target: x86_64-pc-windows-msvc
# os: windows-latest
# ext: .exe
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v1
# - name: Cache choco cache
# uses: actions/cache@v1.0.3
# if: matrix.os == 'windows-latest'
# with:
# path: ~\AppData\Local\Temp\chocolatey
# key: ${{ runner.os }}-choco-cache-${{ matrix.db-backend }}
- name: Cache vcpkg installed
uses: actions/cache@v1.0.3
if: matrix.os == 'windows-latest'
with:
path: $VCPKG_ROOT/installed
key: ${{ runner.os }}-vcpkg-cache-${{ matrix.db-backend }}
env:
VCPKG_ROOT: 'C:\vcpkg'
- name: Cache vcpkg downloads
uses: actions/cache@v1.0.3
if: matrix.os == 'windows-latest'
with:
path: $VCPKG_ROOT/downloads
key: ${{ runner.os }}-vcpkg-cache-${{ matrix.db-backend }}
env:
VCPKG_ROOT: 'C:\vcpkg'
# - name: Cache homebrew
# uses: actions/cache@v1.0.3
# if: matrix.os == 'macOS-latest'
# with:
# path: ~/Library/Caches/Homebrew
# key: ${{ runner.os }}-brew-cache
# - name: Cache apt
# uses: actions/cache@v1.0.3
# if: matrix.os == 'ubuntu-latest'
# with:
# path: /var/cache/apt/archives
# key: ${{ runner.os }}-apt-cache
# Install dependencies
- name: Install dependencies macOS
run: brew update; brew install openssl sqlite libpq mysql
if: matrix.os == 'macOS-latest'
- name: Install dependencies Ubuntu
run: sudo apt-get update && sudo apt-get install --no-install-recommends openssl sqlite libpq-dev libmysql++-dev
if: matrix.os == 'ubuntu-latest'
- name: Install dependencies Windows
run: vcpkg integrate install; vcpkg install sqlite3:x64-windows openssl:x64-windows libpq:x64-windows libmysql:x64-windows
if: matrix.os == 'windows-latest'
env:
VCPKG_ROOT: 'C:\vcpkg'
# End Install dependencies
# Install rust nightly toolchain
- name: Cache cargo registry
uses: actions/cache@v1.0.3
with:
path: ~/.cargo/registry
key: ${{ runner.os }}-${{matrix.db-backend}}-cargo-registry-${{ hashFiles('**/Cargo.lock') }}
- name: Cache cargo index
uses: actions/cache@v1.0.3
with:
path: ~/.cargo/git
key: ${{ runner.os }}-${{matrix.db-backend}}-cargo-index-${{ hashFiles('**/Cargo.lock') }}
- name: Cache cargo build
uses: actions/cache@v1.0.3
with:
path: target
key: ${{ runner.os }}-${{matrix.db-backend}}-cargo-build-target-${{ hashFiles('**/Cargo.lock') }}
- name: Install latest nightly
uses: actions-rs/toolchain@v1.0.5
with:
# Uses rust-toolchain to determine version
profile: minimal
target: ${{ matrix.target }}
# Build
- name: Build Win
if: matrix.os == 'windows-latest'
run: cargo.exe build --features ${{ matrix.db-backend }} --release --target ${{ matrix.target }}
env:
RUSTFLAGS: -Ctarget-feature=+crt-static
VCPKG_ROOT: 'C:\vcpkg'
- name: Build macOS / Ubuntu
if: matrix.os == 'macOS-latest' || matrix.os == 'ubuntu-latest'
run: cargo build --verbose --features ${{ matrix.db-backend }} --release --target ${{ matrix.target }}
# Test
- name: Run tests
run: cargo test --features ${{ matrix.db-backend }}
# Upload & Release
- name: Upload artifact
uses: actions/upload-artifact@v1.0.0
with:
name: bitwarden_rs-${{ matrix.db-backend }}-${{ matrix.target }}${{ matrix.ext }}
path: target/${{ matrix.target }}/release/bitwarden_rs${{ matrix.ext }}
- name: Release
uses: Shopify/upload-to-release@1.0.0
if: startsWith(github.ref, 'refs/tags/')
with:
name: bitwarden_rs-${{ matrix.db-backend }}-${{ matrix.target }}${{ matrix.ext }}
path: target/${{ matrix.target }}/release/bitwarden_rs${{ matrix.ext }}
repo-token: ${{ secrets.GITHUB_TOKEN }}

View File

@ -1,21 +0,0 @@
dist: xenial
env:
global:
- HADOLINT_VERSION=1.17.1
language: rust
rust: nightly
cache: cargo
before_install:
- sudo curl -L https://github.com/hadolint/hadolint/releases/download/v$HADOLINT_VERSION/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint
- sudo chmod +rx /usr/local/bin/hadolint
- rustup set profile minimal
# Nothing to install
install: true
script:
- git ls-files --exclude='Dockerfile*' --ignored | xargs --max-lines=1 hadolint
- cargo test --features "sqlite"
- cargo test --features "mysql"

984
Cargo.lock generated

File diff suppressed because it is too large

View File

@ -16,9 +16,11 @@ enable_syslog = []
mysql = ["diesel/mysql", "diesel_migrations/mysql"]
postgresql = ["diesel/postgres", "diesel_migrations/postgres"]
sqlite = ["diesel/sqlite", "diesel_migrations/sqlite", "libsqlite3-sys"]
# Enable to use a vendored and statically linked openssl
vendored_openssl = ["openssl/vendored"]
# Enable unstable features, requires nightly
# Currently only used to enable rusts official ip support
# Currently only used to enable rusts official ip support
unstable = []
[target."cfg(not(windows))".dependencies]
@ -30,24 +32,24 @@ rocket = { version = "0.5.0-dev", features = ["tls"], default-features = false }
rocket_contrib = "0.5.0-dev"
# HTTP client
reqwest = { version = "0.10.6", features = ["blocking", "json"] }
reqwest = { version = "0.10.10", features = ["blocking", "json"] }
# multipart/form-data support
multipart = { version = "0.17.0", features = ["server"], default-features = false }
# WebSockets library
ws = "0.9.1"
ws = { version = "0.10.0", package = "parity-ws" }
# MessagePack library
rmpv = "0.4.4"
rmpv = "0.4.6"
# Concurrent hashmap implementation
chashmap = "2.2.2"
# A generic serialization/deserialization framework
serde = "1.0.114"
serde_derive = "1.0.114"
serde_json = "1.0.56"
serde = "1.0.118"
serde_derive = "1.0.118"
serde_json = "1.0.60"
# Logging
log = "0.4.11"
@ -60,22 +62,23 @@ diesel_migrations = "1.4.0"
# Bundled SQLite
libsqlite3-sys = { version = "0.18.0", features = ["bundled"], optional = true }
# Crypto library
ring = "0.16.15"
# Crypto-related libraries
rand = "0.7.3"
ring = "0.16.19"
# UUID generation
uuid = { version = "0.8.1", features = ["v4"] }
# Date and time libraries
chrono = "0.4.13"
chrono-tz = "0.5.2"
time = "0.2.16"
chrono = "0.4.19"
chrono-tz = "0.5.3"
time = "0.2.23"
# TOTP library
oath = "0.10.2"
# Data encoding library
data-encoding = "2.2.1"
data-encoding = "2.3.1"
# JWT library
jsonwebtoken = "7.2.0"
@ -90,26 +93,26 @@ yubico = { version = "0.9.1", features = ["online-tokio"], default-features = fa
dotenv = { version = "0.15.0", default-features = false }
# Lazy initialization
once_cell = "1.4.0"
once_cell = "1.5.2"
# Numerical libraries
num-traits = "0.2.12"
num-derive = "0.3.0"
num-traits = "0.2.14"
num-derive = "0.3.3"
# Email libraries
lettre = { version = "0.10.0-alpha.1", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname"], default-features = false }
native-tls = "0.2.4"
lettre = { version = "0.10.0-alpha.4", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
newline-converter = "0.1.0"
# Template library
handlebars = { version = "3.3.0", features = ["dir_source"] }
handlebars = { version = "3.5.1", features = ["dir_source"] }
# For favicon extraction from main website
soup = "0.5.0"
regex = "1.3.9"
regex = "1.4.2"
data-url = "0.1.0"
# Used by U2F, JWT and Postgres
openssl = "0.10.30"
openssl = "0.10.31"
# URL encoding library
percent-encoding = "2.1.0"
@ -117,15 +120,18 @@ percent-encoding = "2.1.0"
idna = "0.2.0"
# CLI argument parsing
structopt = "0.3.15"
structopt = "0.3.21"
# Logging panics to logfile instead stderr only
backtrace = "0.3.50"
backtrace = "0.3.55"
# Macro ident concatenation
paste = "1.0.4"
[patch.crates-io]
# Use newest ring
rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = '1010f6a2a88fac899dec0cd2f642156908038a53' }
rocket_contrib = { git = 'https://github.com/SergioBenitez/Rocket', rev = '1010f6a2a88fac899dec0cd2f642156908038a53' }
rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }
rocket_contrib = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }
# For favicon extraction from main website
data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = '7f1bd6ce1c2fde599a757302a843a60e714c5f72' }
data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = '540ede02d0771824c0c80ff9f57fe8eff38b1291' }

View File

@ -1 +1 @@
docker/amd64/sqlite/Dockerfile
docker/amd64/Dockerfile

View File

@ -21,7 +21,6 @@ Image is based on [Rust implementation of Bitwarden API](https://github.com/dani
Basically full implementation of Bitwarden API is provided including:
* Single user functionality
* Organizations support
* Attachments
* Vault API support
@ -59,3 +58,4 @@ If you prefer to chat, we're usually hanging around at [#bitwarden_rs:matrix.org
Thanks for your contribution to the project!
- [@ChonoN](https://github.com/ChonoN)
- [@themightychris](https://github.com/themightychris)

View File

@ -1,5 +1,5 @@
pool:
vmImage: 'Ubuntu-16.04'
vmImage: 'Ubuntu-18.04'
steps:
- script: |
@ -10,16 +10,13 @@ steps:
- script: |
sudo apt-get update
sudo apt-get install -y libmysql++-dev
displayName: Install libmysql
sudo apt-get install -y --no-install-recommends build-essential libmariadb-dev-compat libpq-dev libssl-dev pkgconf
displayName: 'Install build libraries.'
- script: |
rustc -Vv
cargo -V
displayName: Query rust and cargo versions
- script : cargo test --features "sqlite"
displayName: 'Test project with sqlite backend'
- script : cargo test --features "mysql"
displayName: 'Test project with mysql backend'
- script : cargo test --features "sqlite,mysql,postgresql"
displayName: 'Test project with sqlite, mysql and postgresql backends'

View File

@ -1,13 +1,14 @@
use std::process::Command;
use std::env;
fn main() {
#[cfg(all(feature = "sqlite", feature = "mysql"))]
compile_error!("Can't enable both sqlite and mysql at the same time");
#[cfg(all(feature = "sqlite", feature = "postgresql"))]
compile_error!("Can't enable both sqlite and postgresql at the same time");
#[cfg(all(feature = "mysql", feature = "postgresql"))]
compile_error!("Can't enable both mysql and postgresql at the same time");
fn main() {
// This allow using #[cfg(sqlite)] instead of #[cfg(feature = "sqlite")], which helps when trying to add them through macros
#[cfg(feature = "sqlite")]
println!("cargo:rustc-cfg=sqlite");
#[cfg(feature = "mysql")]
println!("cargo:rustc-cfg=mysql");
#[cfg(feature = "postgresql")]
println!("cargo:rustc-cfg=postgresql");
#[cfg(not(any(feature = "sqlite", feature = "mysql", feature = "postgresql")))]
compile_error!("You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite");

View File

@ -1,68 +1,82 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
{% set build_stage_base_image = "rust:1.40" %}
{% set build_stage_base_image = "rust:1.48" %}
{% if "alpine" in target_file %}
{% set build_stage_base_image = "clux/muslrust:nightly-2020-03-09" %}
{% set runtime_stage_base_image = "alpine:3.11" %}
{% set package_arch_name = "" %}
{% if "amd64" in target_file %}
{% set build_stage_base_image = "clux/muslrust:nightly-2020-11-22" %}
{% set runtime_stage_base_image = "alpine:3.12" %}
{% set package_arch_target = "x86_64-unknown-linux-musl" %}
{% elif "arm32v7" in target_file %}
{% set build_stage_base_image = "messense/rust-musl-cross:armv7-musleabihf" %}
{% set runtime_stage_base_image = "balenalib/armv7hf-alpine:3.12" %}
{% set package_arch_target = "armv7-unknown-linux-musleabihf" %}
{% endif %}
{% elif "amd64" in target_file %}
{% set runtime_stage_base_image = "debian:buster-slim" %}
{% set package_arch_name = "" %}
{% elif "arm64v8" in target_file %}
{% set runtime_stage_base_image = "balenalib/aarch64-debian:buster" %}
{% set package_arch_name = "arm64" %}
{% set package_arch_target = "aarch64-unknown-linux-gnu" %}
{% set package_cross_compiler = "aarch64-linux-gnu" %}
{% elif "arm32v6" in target_file %}
{% set runtime_stage_base_image = "balenalib/rpi-debian:buster" %}
{% set package_arch_name = "armel" %}
{% set package_arch_target = "arm-unknown-linux-gnueabi" %}
{% set package_cross_compiler = "arm-linux-gnueabi" %}
{% elif "arm32v7" in target_file %}
{% set runtime_stage_base_image = "balenalib/armv7hf-debian:buster" %}
{% set package_arch_name = "armhf" %}
{% set package_arch_target = "armv7-unknown-linux-gnueabihf" %}
{% set package_cross_compiler = "arm-linux-gnueabihf" %}
{% endif %}
{% set package_arch_prefix = ":" + package_arch_name %}
{% if package_arch_name == "" %}
{% if package_arch_name is defined %}
{% set package_arch_prefix = ":" + package_arch_name %}
{% else %}
{% set package_arch_prefix = "" %}
{% endif %}
{% if package_arch_target is defined %}
{% set package_arch_target_param = " --target=" + package_arch_target %}
{% else %}
{% set package_arch_target_param = "" %}
{% endif %}
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
{% set vault_image_hash = "sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c" %}
{% set vault_image_hash = "sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0" %}
{% raw %}
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
{% endraw %}
FROM bitwardenrs/web-vault@{{ vault_image_hash }} as vault
########################## BUILD IMAGE ##########################
{% if "musl" in build_stage_base_image %}
# Musl build image for statically compiled binary
{% else %}
# We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling
{% endif %}
FROM {{ build_stage_base_image }} as build
{% if "sqlite" in target_file %}
# set sqlite as default for DB ARG for backward compatibility
{% if "alpine" in target_file %}
{% if "amd64" in target_file %}
# Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
ARG DB=sqlite,postgresql
{% set features = "sqlite,postgresql" %}
{% else %}
# Alpine-based ARM (musl) only supports sqlite during compile time.
ARG DB=sqlite
{% elif "mysql" in target_file %}
# set mysql backend
ARG DB=mysql
{% elif "postgresql" in target_file %}
# set postgresql backend
ARG DB=postgresql
{% set features = "sqlite" %}
{% endif %}
{% else %}
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
{% set features = "sqlite,mysql,postgresql" %}
{% endif %}
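Because DB is a build ARG, the feature set can still be narrowed when building an individual image; a hypothetical invocation (the tag name is a placeholder) would look like this:
# Illustrative: override the default multidb feature list at build time.
docker build --build-arg DB=sqlite -f docker/amd64/Dockerfile -t bitwarden_rs:sqlite-only .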
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
@ -72,9 +86,9 @@ RUN rustup set profile minimal
{% if "alpine" in target_file %}
ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s'
{% elif "arm32" in target_file or "arm64" in target_file %}
{% elif "arm" in target_file %}
# Install required build libs for {{ package_arch_name }} architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture {{ package_arch_name }} \
@ -82,65 +96,34 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
&& apt-get install -y \
--no-install-recommends \
libssl-dev{{ package_arch_prefix }} \
libc6-dev{{ package_arch_prefix }}
libc6-dev{{ package_arch_prefix }} \
libpq5{{ package_arch_prefix }} \
libpq-dev \
libmariadb-dev{{ package_arch_prefix }} \
libmariadb-dev-compat{{ package_arch_prefix }}
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-{{ package_cross_compiler }} \
&& mkdir -p ~/.cargo \
&& echo '[target.{{ package_arch_target }}]' >> ~/.cargo/config \
&& echo 'linker = "{{ package_cross_compiler }}-gcc"' >> ~/.cargo/config \
&& echo 'rustflags = ["-L/usr/lib/{{ package_cross_compiler }}"]' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% endif -%}
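For reference, on the arm32v7 target the three echo lines above would leave ~/.cargo/config looking roughly like this (heredoc form shown only as an illustration):
# Equivalent to the echo lines, instantiated for armv7-unknown-linux-gnueabihf.
mkdir -p ~/.cargo
cat >> ~/.cargo/config <<'EOF'
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]
EOF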
{% if "arm64v8" in target_file %}
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-aarch64-linux-gnu \
&& mkdir -p ~/.cargo \
&& echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config \
&& echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% elif "arm32v6" in target_file %}
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-arm-linux-gnueabi \
&& mkdir -p ~/.cargo \
&& echo '[target.arm-unknown-linux-gnueabi]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabi-gcc"' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% elif "arm32v7" in target_file %}
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-arm-linux-gnueabihf \
&& mkdir -p ~/.cargo \
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabihf-gcc"' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
{% endif %}
{% if "mysql" in target_file %}
# Install MySQL package
{% if "amd64" in target_file and "alpine" not in target_file %}
# Install DB packages
RUN apt-get update && apt-get install -y \
--no-install-recommends \
{% if "musl" in build_stage_base_image %}
libmysqlclient-dev{{ package_arch_prefix }} \
{% else %}
libmariadb-dev{{ package_arch_prefix }} \
{% endif %}
&& rm -rf /var/lib/apt/lists/*
{% elif "postgresql" in target_file %}
# Install PostgreSQL package
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libpq-dev{{ package_arch_prefix }} \
&& rm -rf /var/lib/apt/lists/*
{% endif %}
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@ -150,39 +133,38 @@ COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
{% if "arm64v8" in target_file %}
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
{% if "alpine" not in target_file %}
{% if "arm" in target_file %}
# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the {{ package_arch_prefix }} version.
# What we can do is a force install, because nothing important is overlapping each other.
RUN apt-get install -y --no-install-recommends libmariadb3:amd64 && \
apt-get download libmariadb-dev-compat:amd64 && \
dpkg --force-all -i ./libmariadb-dev-compat*.deb && \
rm -rvf ./libmariadb-dev-compat*.deb
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5{{ package_arch_prefix }} package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
# Without this specific file the ld command will fail and compilation fails with it.
RUN ln -sfnr /usr/lib/{{ package_cross_compiler }}/libpq.so.5 /usr/lib/{{ package_cross_compiler }}/libpq.so
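A quick sanity check for that symlink (shown here for the armhf cross toolchain; the concrete path is an example, not part of the template) would be:
# The generic .so name should now resolve to the runtime library libpq.so.5.
ls -l /usr/lib/arm-linux-gnueabihf/libpq.so
readlink -f /usr/lib/arm-linux-gnueabihf/libpq.so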
ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_compiler }}-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
{% elif "arm32v6" in target_file %}
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
{% elif "arm32v7" in target_file %}
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
{% endif -%}
{% if "alpine" in target_file %}
RUN rustup target add x86_64-unknown-linux-musl
{% elif "arm64v8" in target_file %}
RUN rustup target add aarch64-unknown-linux-gnu
{% elif "arm32v6" in target_file %}
RUN rustup target add arm-unknown-linux-gnueabi
{% elif "arm32v7" in target_file %}
RUN rustup target add armv7-unknown-linux-gnueabihf
ENV OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}"
ENV OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}"
{% endif -%}
{% endif %}
{% if package_arch_target is defined %}
RUN rustup target add {{ package_arch_target }}
{% endif %}
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN cargo build --features ${DB} --release{{ package_arch_target_param }}
RUN find . -not -path "./target*" -delete
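Rendered for the Alpine amd64 (musl) target, that build line executes the following command in the build stage (the feature list and target triple match the generated Dockerfile later in this diff):
# What the templated build line expands to for the Alpine amd64 target.
DB=sqlite,postgresql
cargo build --features "${DB}" --release --target=x86_64-unknown-linux-musl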
# Copies the complete project
@ -194,14 +176,11 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
{% if "amd64" in target_file %}
RUN cargo build --features ${DB} --release
{% elif "arm64v8" in target_file %}
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
{% elif "arm32v6" in target_file %}
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
{% elif "arm32v7" in target_file %}
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
RUN cargo build --features ${DB} --release{{ package_arch_target_param }}
{% if "alpine" in target_file %}
{% if "arm32v7" in target_file %}
RUN musl-strip target/{{ package_arch_target }}/release/bitwarden_rs
{% endif %}
{% endif %}
######################## RUNTIME IMAGE ########################
@ -225,11 +204,13 @@ RUN [ "cross-build-start" ]
RUN apk add --no-cache \
openssl \
curl \
{% if "sqlite" in target_file %}
{% if "sqlite" in features %}
sqlite \
{% elif "mysql" in target_file %}
{% endif %}
{% if "mysql" in features %}
mariadb-connector-c \
{% elif "postgresql" in target_file %}
{% endif %}
{% if "postgresql" in features %}
postgresql-libs \
{% endif %}
ca-certificates
@ -239,15 +220,14 @@ RUN apt-get update && apt-get install -y \
openssl \
ca-certificates \
curl \
{% if "sqlite" in target_file %}
sqlite3 \
{% elif "mysql" in target_file %}
libmariadbclient-dev \
{% elif "postgresql" in target_file %}
libmariadb-dev-compat \
libpq5 \
{% endif %}
&& rm -rf /var/lib/apt/lists/*
{% endif %}
{% if "alpine" in target_file and "arm32v7" in target_file %}
RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community catatonit
{% endif %}
RUN mkdir /data
{% if "amd64" not in target_file %}
@ -263,16 +243,10 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
{% if "alpine" in target_file %}
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/bitwarden_rs .
{% elif "arm64v8" in target_file %}
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/bitwarden_rs .
{% elif "arm32v6" in target_file %}
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/bitwarden_rs .
{% elif "arm32v7" in target_file %}
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/bitwarden_rs .
{% if package_arch_target is defined %}
COPY --from=build /app/target/{{ package_arch_target }}/release/bitwarden_rs .
{% else %}
COPY --from=build app/target/release/bitwarden_rs .
COPY --from=build /app/target/release/bitwarden_rs .
{% endif %}
COPY docker/healthcheck.sh /healthcheck.sh
@ -282,4 +256,8 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
{% if "alpine" in target_file and "arm32v7" in target_file %}
CMD ["catatonit", "/start.sh"]
{% else %}
CMD ["/start.sh"]
{% endif %}


@ -6,24 +6,22 @@
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
FROM bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c as vault
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
FROM bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0 as vault
########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build
FROM rust:1.48 as build
# set postgresql backend
ARG DB=postgresql
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
@ -31,9 +29,10 @@ ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs
RUN rustup set profile minimal
# Install PostgreSQL package
# Install DB packages
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libmariadb-dev \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
@ -46,6 +45,7 @@ COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
@ -78,6 +78,8 @@ RUN apt-get update && apt-get install -y \
openssl \
ca-certificates \
curl \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
&& rm -rf /var/lib/apt/lists/*
@ -90,7 +92,7 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build app/target/release/bitwarden_rs .
COPY --from=build /app/target/release/bitwarden_rs .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
@ -100,3 +102,4 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/start.sh"]


@ -6,23 +6,22 @@
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
FROM bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c as vault
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
FROM bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0 as vault
########################## BUILD IMAGE ##########################
# Musl build image for statically compiled binary
FROM clux/muslrust:nightly-2020-03-09 as build
FROM clux/muslrust:nightly-2020-11-22 as build
# set sqlite as default for DB ARG for backward compatibility
ARG DB=sqlite
# Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
ARG DB=sqlite,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
@ -47,7 +46,7 @@ RUN rustup target add x86_64-unknown-linux-musl
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
RUN find . -not -path "./target*" -delete
# Copies the complete project
@ -59,12 +58,12 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release
RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.11
FROM alpine:3.12
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
@ -76,6 +75,7 @@ RUN apk add --no-cache \
openssl \
curl \
sqlite \
postgresql-libs \
ca-certificates
RUN mkdir /data
@ -97,3 +97,4 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/start.sh"]


@ -1,102 +0,0 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
FROM bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c as vault
########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build
# set mysql backend
ARG DB=mysql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs
RUN rustup set profile minimal
# Install MySQL package
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libmariadb-dev \
&& rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM debian:buster-slim
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
# Install needed libraries
RUN apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadbclient-dev \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /data
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build app/target/release/bitwarden_rs .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/start.sh"]


@ -1,105 +0,0 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
FROM bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c as vault
########################## BUILD IMAGE ##########################
# Musl build image for statically compiled binary
FROM clux/muslrust:nightly-2020-03-09 as build
# set mysql backend
ARG DB=mysql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs
RUN rustup set profile minimal
ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s'
# Install MySQL package
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libmysqlclient-dev \
&& rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN rustup target add x86_64-unknown-linux-musl
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.11
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV SSL_CERT_DIR=/etc/ssl/certs
# Install needed libraries
RUN apk add --no-cache \
openssl \
curl \
mariadb-connector-c \
ca-certificates
RUN mkdir /data
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/bitwarden_rs .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/start.sh"]


@ -1,96 +0,0 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
FROM bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c as vault
########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build
# set sqlite as default for DB ARG for backward compatibility
ARG DB=sqlite
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs
RUN rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM debian:buster-slim
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
# Install needed libraries
RUN apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
sqlite3 \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /data
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build app/target/release/bitwarden_rs .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/start.sh"]


@ -6,24 +6,22 @@
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
FROM bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c as vault
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
FROM bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0 as vault
########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build
FROM rust:1.48 as build
# set sqlite as default for DB ARG for backward compatibility
ARG DB=sqlite
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
@ -32,6 +30,7 @@ ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
RUN rustup set profile minimal
# Install required build libs for armel architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture armel \
@ -39,7 +38,11 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
libc6-dev:armel
libc6-dev:armel \
libpq5:armel \
libpq-dev \
libmariadb-dev:armel \
libmariadb-dev-compat:armel
RUN apt-get update \
&& apt-get install -y \
@ -47,7 +50,8 @@ RUN apt-get update \
gcc-arm-linux-gnueabi \
&& mkdir -p ~/.cargo \
&& echo '[target.arm-unknown-linux-gnueabi]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabi-gcc"' >> ~/.cargo/config
&& echo 'linker = "arm-linux-gnueabi-gcc"' >> ~/.cargo/config \
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
@ -61,6 +65,22 @@ COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armel version.
# What we can do is a force install, because nothing important is overlapping each other.
RUN apt-get install -y --no-install-recommends libmariadb3:amd64 && \
apt-get download libmariadb-dev-compat:amd64 && \
dpkg --force-all -i ./libmariadb-dev-compat*.deb && \
rm -rvf ./libmariadb-dev-compat*.deb
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5:armel package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
# Without this specific file the ld command will fail and compilation fails with it.
RUN ln -sfnr /usr/lib/arm-linux-gnueabi/libpq.so.5 /usr/lib/arm-linux-gnueabi/libpq.so
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
@ -70,7 +90,7 @@ RUN rustup target add arm-unknown-linux-gnueabi
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
RUN find . -not -path "./target*" -delete
# Copies the complete project
@ -102,6 +122,8 @@ RUN apt-get update && apt-get install -y \
ca-certificates \
curl \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /data
@ -126,3 +148,4 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/start.sh"]


@ -1,134 +0,0 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
FROM bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c as vault
########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build
# set mysql backend
ARG DB=mysql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs
RUN rustup set profile minimal
# Install required build libs for armel architecture.
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture armel \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armel \
libc6-dev:armel
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-arm-linux-gnueabi \
&& mkdir -p ~/.cargo \
&& echo '[target.arm-unknown-linux-gnueabi]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabi-gcc"' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
# Install MySQL package
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libmariadb-dev:armel \
&& rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
RUN rustup target add arm-unknown-linux-gnueabi
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-debian:buster
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
RUN [ "cross-build-start" ]
# Install needed libraries
RUN apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadbclient-dev \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /data
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/bitwarden_rs .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/start.sh"]


@ -6,24 +6,22 @@
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
FROM bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c as vault
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
FROM bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0 as vault
########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build
FROM rust:1.48 as build
# set sqlite as default for DB ARG for backward compatibility
ARG DB=sqlite
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
@ -32,6 +30,7 @@ ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
RUN rustup set profile minimal
# Install required build libs for armhf architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture armhf \
@ -39,7 +38,11 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
libc6-dev:armhf
libc6-dev:armhf \
libpq5:armhf \
libpq-dev \
libmariadb-dev:armhf \
libmariadb-dev-compat:armhf
RUN apt-get update \
&& apt-get install -y \
@ -47,7 +50,8 @@ RUN apt-get update \
gcc-arm-linux-gnueabihf \
&& mkdir -p ~/.cargo \
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabihf-gcc"' >> ~/.cargo/config
&& echo 'linker = "arm-linux-gnueabihf-gcc"' >> ~/.cargo/config \
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
@ -61,15 +65,32 @@ COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armhf version.
# What we can do is a force install, because nothing important is overlapping each other.
RUN apt-get install -y --no-install-recommends libmariadb3:amd64 && \
apt-get download libmariadb-dev-compat:amd64 && \
dpkg --force-all -i ./libmariadb-dev-compat*.deb && \
rm -rvf ./libmariadb-dev-compat*.deb
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5:armhf package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
# Without this specific file the ld command will fail and compilation fails with it.
RUN ln -sfnr /usr/lib/arm-linux-gnueabihf/libpq.so.5 /usr/lib/arm-linux-gnueabihf/libpq.so
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
RUN rustup target add armv7-unknown-linux-gnueabihf
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
RUN find . -not -path "./target*" -delete
# Copies the complete project
@ -101,6 +122,8 @@ RUN apt-get update && apt-get install -y \
ca-certificates \
curl \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /data
@ -125,3 +148,4 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/start.sh"]


@ -6,23 +6,22 @@
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
FROM bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c as vault
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
FROM bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0 as vault
########################## BUILD IMAGE ##########################
# Musl build image for statically compiled binary
FROM clux/muslrust:nightly-2020-03-09 as build
FROM messense/rust-musl-cross:armv7-musleabihf as build
# set postgresql backend
ARG DB=postgresql
# Alpine-based ARM (musl) only supports sqlite during compile time.
ARG DB=sqlite
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
@ -33,12 +32,6 @@ RUN rustup set profile minimal
ENV USER "root"
ENV RUSTFLAGS='-C link-arg=-s'
# Install PostgreSQL package
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
@ -48,12 +41,12 @@ COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
RUN rustup target add x86_64-unknown-linux-musl
RUN rustup target add armv7-unknown-linux-musleabihf
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
RUN find . -not -path "./target*" -delete
# Copies the complete project
@ -65,26 +58,33 @@ RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
RUN musl-strip target/armv7-unknown-linux-musleabihf/release/bitwarden_rs
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.11
FROM balenalib/armv7hf-alpine:3.12
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV SSL_CERT_DIR=/etc/ssl/certs
RUN [ "cross-build-start" ]
# Install needed libraries
RUN apk add --no-cache \
openssl \
curl \
postgresql-libs \
sqlite \
ca-certificates
RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community catatonit
RUN mkdir /data
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
@ -93,7 +93,7 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/bitwarden_rs .
COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/bitwarden_rs .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
@ -102,4 +102,5 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/start.sh"]
CMD ["catatonit", "/start.sh"]


@ -1,133 +0,0 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
FROM bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c as vault
########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build
# set mysql backend
ARG DB=mysql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs
RUN rustup set profile minimal
# Install required build libs for armhf architecture.
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture armhf \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:armhf \
libc6-dev:armhf
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-arm-linux-gnueabihf \
&& mkdir -p ~/.cargo \
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> ~/.cargo/config \
&& echo 'linker = "arm-linux-gnueabihf-gcc"' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
# Install MySQL package
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libmariadb-dev:armhf \
&& rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
RUN rustup target add armv7-unknown-linux-gnueabihf
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/armv7hf-debian:buster
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
RUN [ "cross-build-start" ]
# Install needed libraries
RUN apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadbclient-dev \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /data
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/bitwarden_rs .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/start.sh"]


@ -6,24 +6,22 @@
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# This hash is extracted from the docker web-vault builds and it's preferred over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
# docker pull bitwardenrs/web-vault:v2.17.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.17.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
FROM bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c as vault
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0
FROM bitwardenrs/web-vault@sha256:dcb7884dc5845b3842ff2204fe77482000b771495c6c359297ec3c03330d65e0 as vault
########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build
FROM rust:1.48 as build
# set sqlite as default for DB ARG for backward compatibility
ARG DB=sqlite
# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
@ -32,6 +30,7 @@ ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
RUN rustup set profile minimal
# Install required build libs for arm64 architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture arm64 \
@ -39,7 +38,11 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
libc6-dev:arm64
libc6-dev:arm64 \
libpq5:arm64 \
libpq-dev \
libmariadb-dev:arm64 \
libmariadb-dev-compat:arm64
RUN apt-get update \
&& apt-get install -y \
@ -47,7 +50,8 @@ RUN apt-get update \
gcc-aarch64-linux-gnu \
&& mkdir -p ~/.cargo \
&& echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config \
&& echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config
&& echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config \
&& echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
@ -61,6 +65,22 @@ COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :arm64 version.
# What we can do is a force install, because nothing important is overlapping each other.
RUN apt-get install -y --no-install-recommends libmariadb3:amd64 && \
apt-get download libmariadb-dev-compat:amd64 && \
dpkg --force-all -i ./libmariadb-dev-compat*.deb && \
rm -rvf ./libmariadb-dev-compat*.deb
# For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
# The libpq5:arm64 package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
# This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
# Without this specific file the ld command will fail and compilation fails with it.
RUN ln -sfnr /usr/lib/aarch64-linux-gnu/libpq.so.5 /usr/lib/aarch64-linux-gnu/libpq.so
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
@ -70,7 +90,7 @@ RUN rustup target add aarch64-unknown-linux-gnu
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
RUN find . -not -path "./target*" -delete
# Copies the complete project
@ -102,6 +122,8 @@ RUN apt-get update && apt-get install -y \
ca-certificates \
curl \
sqlite3 \
libmariadb-dev-compat \
libpq5 \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /data
@ -126,3 +148,4 @@ HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/start.sh"]


@ -1,134 +0,0 @@
# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfile's.
# Using multistage build:
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# This hash is extracted from the docker web-vault builds and it's prefered over a simple tag because it's immutable.
# It can be viewed in multiple ways:
# - From the https://hub.docker.com/repository/docker/bitwardenrs/web-vault/tags page, click the tag name and the digest should be there.
# - From the console, with the following commands:
# docker pull bitwardenrs/web-vault:v2.15.1
# docker image inspect --format "{{.RepoDigests}}" bitwardenrs/web-vault:v2.15.1
#
# - To do the opposite, and get the tag from the hash, you can do:
# docker image inspect --format "{{.RepoTags}}" bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c
FROM bitwardenrs/web-vault@sha256:afba1e3bded09dc0a6a0dbacb3363ac33b6f122b4b26d3682cafb9115bdf785c as vault
########################## BUILD IMAGE ##########################
# We need to use the Rust build image, because
# we need the Rust compiler and Cargo tooling
FROM rust:1.40 as build
# set mysql backend
ARG DB=mysql
# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
# Don't download rust docs
RUN rustup set profile minimal
# Install required build libs for arm64 architecture.
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
/etc/apt/sources.list.d/deb-src.list \
&& dpkg --add-architecture arm64 \
&& apt-get update \
&& apt-get install -y \
--no-install-recommends \
libssl-dev:arm64 \
libc6-dev:arm64
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc-aarch64-linux-gnu \
&& mkdir -p ~/.cargo \
&& echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config \
&& echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config
ENV CARGO_HOME "/root/.cargo"
ENV USER "root"
# Install MySQL package
RUN apt-get update && apt-get install -y \
--no-install-recommends \
libmariadb-dev:arm64 \
&& rm -rf /var/lib/apt/lists/*
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
RUN rustup target add aarch64-unknown-linux-gnu
# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN cargo build --features ${DB} --release
RUN find . -not -path "./target*" -delete
# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .
# Make sure that we actually build the project
RUN touch src/main.rs
# Builds again, this time it'll just be
# your actual source files being built
RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu
######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-debian:buster
ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
RUN [ "cross-build-start" ]
# Install needed libraries
RUN apt-get update && apt-get install -y \
--no-install-recommends \
openssl \
ca-certificates \
curl \
libmariadbclient-dev \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /data
RUN [ "cross-build-end" ]
VOLUME /data
EXPOSE 80
EXPOSE 3012
# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/bitwarden_rs .
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
# Configures the startup!
WORKDIR /
CMD ["/start.sh"]

View File

@ -1,6 +1,6 @@
# The default Debian-based SQLite images support these arches.
# The default Debian-based images support these arches for all database connections
#
# Other images (Alpine-based, or with other database backends) currently
# Other images (Alpine-based) currently
# support only a subset of these.
arches=(
amd64
@ -9,22 +9,11 @@ arches=(
arm64v8
)
case "${DOCKER_REPO}" in
*-mysql)
db=mysql
arches=(amd64)
;;
*-postgresql)
db=postgresql
arches=(amd64)
;;
*)
db=sqlite
;;
esac
if [[ "${DOCKER_TAG}" == *alpine ]]; then
# The Alpine build currently only works for amd64.
os_suffix=.alpine
arches=(amd64)
arches=(
amd64
arm32v7
)
fi

View File

@ -9,6 +9,6 @@ set -ex
for arch in "${arches[@]}"; do
docker build \
-t "${DOCKER_REPO}:${DOCKER_TAG}-${arch}" \
-f docker/${arch}/${db}/Dockerfile${os_suffix} \
-f docker/${arch}/Dockerfile${os_suffix} \
.
done

View File

@ -36,6 +36,21 @@ if [[ "${DOCKER_TAG}" =~ ^[0-9]+\.[0-9]+\.[0-9]+ ]]; then
manifest_lists+=(${DOCKER_REPO}:alpine)
else
manifest_lists+=(${DOCKER_REPO}:latest)
# Add an extra `latest-arm32v6` tag; Docker can't seem to properly
# auto-select that image on Armv6 platforms like Raspberry Pi 1 and Zero
# (https://github.com/moby/moby/issues/41017).
#
# Add this tag only for the SQLite image, as the MySQL and PostgreSQL
# builds don't currently work on non-amd64 arches.
#
# TODO: Also add an `alpine-arm32v6` tag if multi-arch support for
# Alpine-based bitwarden_rs images is implemented before this Docker
# issue is fixed.
if [[ ${DOCKER_REPO} == *server ]]; then
docker tag "${DOCKER_REPO}:${DOCKER_TAG}-arm32v6" "${DOCKER_REPO}:latest-arm32v6"
docker push "${DOCKER_REPO}:latest-arm32v6"
fi
fi
fi

View File

@ -0,0 +1,13 @@
ALTER TABLE ciphers
ADD COLUMN favorite BOOLEAN NOT NULL DEFAULT FALSE;
-- Transfer favorite status for user-owned ciphers.
UPDATE ciphers
SET favorite = TRUE
WHERE EXISTS (
SELECT * FROM favorites
WHERE favorites.user_uuid = ciphers.user_uuid
AND favorites.cipher_uuid = ciphers.uuid
);
DROP TABLE favorites;

View File

@ -0,0 +1,16 @@
CREATE TABLE favorites (
user_uuid CHAR(36) NOT NULL REFERENCES users(uuid),
cipher_uuid CHAR(36) NOT NULL REFERENCES ciphers(uuid),
PRIMARY KEY (user_uuid, cipher_uuid)
);
-- Transfer favorite status for user-owned ciphers.
INSERT INTO favorites(user_uuid, cipher_uuid)
SELECT user_uuid, uuid
FROM ciphers
WHERE favorite = TRUE
AND user_uuid IS NOT NULL;
ALTER TABLE ciphers
DROP COLUMN favorite;

View File

@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN enabled BOOLEAN NOT NULL DEFAULT 1;

View File

@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN stamp_exception TEXT DEFAULT NULL;

View File

@ -0,0 +1,13 @@
ALTER TABLE ciphers
ADD COLUMN favorite BOOLEAN NOT NULL DEFAULT FALSE;
-- Transfer favorite status for user-owned ciphers.
UPDATE ciphers
SET favorite = TRUE
WHERE EXISTS (
SELECT * FROM favorites
WHERE favorites.user_uuid = ciphers.user_uuid
AND favorites.cipher_uuid = ciphers.uuid
);
DROP TABLE favorites;

View File

@ -0,0 +1,16 @@
CREATE TABLE favorites (
user_uuid VARCHAR(40) NOT NULL REFERENCES users(uuid),
cipher_uuid VARCHAR(40) NOT NULL REFERENCES ciphers(uuid),
PRIMARY KEY (user_uuid, cipher_uuid)
);
-- Transfer favorite status for user-owned ciphers.
INSERT INTO favorites(user_uuid, cipher_uuid)
SELECT user_uuid, uuid
FROM ciphers
WHERE favorite = TRUE
AND user_uuid IS NOT NULL;
ALTER TABLE ciphers
DROP COLUMN favorite;

View File

@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN enabled BOOLEAN NOT NULL DEFAULT true;

View File

@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN stamp_exception TEXT DEFAULT NULL;

View File

@ -0,0 +1,13 @@
ALTER TABLE ciphers
ADD COLUMN favorite BOOLEAN NOT NULL DEFAULT 0; -- FALSE
-- Transfer favorite status for user-owned ciphers.
UPDATE ciphers
SET favorite = 1
WHERE EXISTS (
SELECT * FROM favorites
WHERE favorites.user_uuid = ciphers.user_uuid
AND favorites.cipher_uuid = ciphers.uuid
);
DROP TABLE favorites;

View File

@ -0,0 +1,71 @@
CREATE TABLE favorites (
user_uuid TEXT NOT NULL REFERENCES users(uuid),
cipher_uuid TEXT NOT NULL REFERENCES ciphers(uuid),
PRIMARY KEY (user_uuid, cipher_uuid)
);
-- Transfer favorite status for user-owned ciphers.
INSERT INTO favorites(user_uuid, cipher_uuid)
SELECT user_uuid, uuid
FROM ciphers
WHERE favorite = 1
AND user_uuid IS NOT NULL;
-- Drop the `favorite` column from the `ciphers` table, using the 12-step
-- procedure from <https://www.sqlite.org/lang_altertable.html#altertabrename>.
-- Note that some steps aren't applicable and are omitted.
-- 1. If foreign key constraints are enabled, disable them using PRAGMA foreign_keys=OFF.
--
-- Diesel runs each migration in its own transaction. `PRAGMA foreign_keys`
-- is a no-op within a transaction, so this step must be done outside of this
-- file, before starting the Diesel migrations.
-- 2. Start a transaction.
--
-- Diesel already runs each migration in its own transaction.
-- 4. Use CREATE TABLE to construct a new table "new_X" that is in the
-- desired revised format of table X. Make sure that the name "new_X" does
-- not collide with any existing table name, of course.
CREATE TABLE new_ciphers(
uuid TEXT NOT NULL PRIMARY KEY,
created_at DATETIME NOT NULL,
updated_at DATETIME NOT NULL,
user_uuid TEXT REFERENCES users(uuid),
organization_uuid TEXT REFERENCES organizations(uuid),
atype INTEGER NOT NULL,
name TEXT NOT NULL,
notes TEXT,
fields TEXT,
data TEXT NOT NULL,
password_history TEXT,
deleted_at DATETIME
);
-- 5. Transfer content from X into new_X using a statement like:
-- INSERT INTO new_X SELECT ... FROM X.
INSERT INTO new_ciphers(uuid, created_at, updated_at, user_uuid, organization_uuid, atype,
name, notes, fields, data, password_history, deleted_at)
SELECT uuid, created_at, updated_at, user_uuid, organization_uuid, atype,
name, notes, fields, data, password_history, deleted_at
FROM ciphers;
-- 6. Drop the old table X: DROP TABLE X.
DROP TABLE ciphers;
-- 7. Change the name of new_X to X using: ALTER TABLE new_X RENAME TO X.
ALTER TABLE new_ciphers RENAME TO ciphers;
-- 11. Commit the transaction started in step 2.
-- 12. If foreign keys constraints were originally enabled, reenable them now.
--
-- `PRAGMA foreign_keys` is scoped to a database connection, and Diesel
-- migrations are run in a separate database connection that is closed once
-- the migrations finish.
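As noted above, `PRAGMA foreign_keys` is a no-op inside a transaction, so it has to be issued on the connection before Diesel starts the embedded migrations. A minimal Rust sketch of that idea follows; the function name, migrations path, and error handling here are illustrative assumptions, not the project's actual code.
// Hypothetical sketch: turn off foreign key checks on the SQLite connection,
// run the embedded Diesel migrations (each wrapped in its own transaction),
// then turn the checks back on. Names and paths are assumptions.
use diesel::{sqlite::SqliteConnection, Connection, RunQueryDsl};
use diesel_migrations::embed_migrations;

embed_migrations!("migrations/sqlite");

fn run_migrations(database_url: &str) -> Result<(), Box<dyn std::error::Error>> {
    let conn = SqliteConnection::establish(database_url)?;

    // Must run outside any transaction, otherwise it silently does nothing.
    diesel::sql_query("PRAGMA foreign_keys = OFF").execute(&conn)?;

    // Diesel runs each migration file in its own transaction.
    embedded_migrations::run(&conn)?;

    // Re-enable the checks for the remainder of this connection's lifetime.
    diesel::sql_query("PRAGMA foreign_keys = ON").execute(&conn)?;
    Ok(())
}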

View File

@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN enabled BOOLEAN NOT NULL DEFAULT 1;

View File

@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN stamp_exception TEXT DEFAULT NULL;

View File

@ -1 +1 @@
nightly-2020-07-11
nightly-2020-11-22

View File

@ -5,7 +5,7 @@ use std::process::Command;
use rocket::{
http::{Cookie, Cookies, SameSite},
request::{self, FlashMessage, Form, FromRequest, Request, Outcome},
request::{self, FlashMessage, Form, FromRequest, Outcome, Request},
response::{content::Html, Flash, Redirect},
Route,
};
@ -15,10 +15,10 @@ use crate::{
api::{ApiResult, EmptyResult, JsonResult},
auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp},
config::ConfigBuilder,
db::{backup_database, models::*, DbConn},
db::{backup_database, models::*, DbConn, DbConnType},
error::{Error, MapResult},
mail,
util::get_display_size,
util::{get_display_size, format_naive_datetime_local},
CONFIG,
};
@ -36,6 +36,8 @@ pub fn routes() -> Vec<Route> {
logout,
delete_user,
deauth_user,
disable_user,
enable_user,
remove_2fa,
update_revision_users,
post_config,
@ -48,8 +50,12 @@ pub fn routes() -> Vec<Route> {
]
}
static CAN_BACKUP: Lazy<bool> =
Lazy::new(|| cfg!(feature = "sqlite") && Command::new("sqlite3").arg("-version").status().is_ok());
static CAN_BACKUP: Lazy<bool> = Lazy::new(|| {
DbConnType::from_url(&CONFIG.database_url())
.map(|t| t == DbConnType::sqlite)
.unwrap_or(false)
&& Command::new("sqlite3").arg("-version").status().is_ok()
});
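For context, `DbConnType::from_url` presumably classifies the configured database URL by its scheme so the backup option is only offered for SQLite. A rough sketch of that idea (not the actual implementation, which may differ in variants and error handling):
// Illustrative sketch only: pick a backend based on the URL prefix.
#[derive(Debug, PartialEq)]
#[allow(non_camel_case_types)]
pub enum DbConnType {
    sqlite,
    mysql,
    postgresql,
}

impl DbConnType {
    pub fn from_url(url: &str) -> Result<Self, String> {
        if url.starts_with("mysql:") {
            Ok(DbConnType::mysql)
        } else if url.starts_with("postgres:") || url.starts_with("postgresql:") {
            Ok(DbConnType::postgresql)
        } else {
            // Anything else (typically a plain file path) is treated as SQLite.
            Ok(DbConnType::sqlite)
        }
    }
}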
#[get("/")]
fn admin_disabled() -> &'static str {
@ -66,12 +72,35 @@ fn admin_path() -> String {
format!("{}{}", CONFIG.domain_path(), ADMIN_PATH)
}
struct Referer(Option<String>);
impl<'a, 'r> FromRequest<'a, 'r> for Referer {
type Error = ();
fn from_request(request: &'a Request<'r>) -> request::Outcome<Self, Self::Error> {
Outcome::Success(Referer(request.headers().get_one("Referer").map(str::to_string)))
}
}
/// Used for `Location` response headers, which must specify an absolute URI
/// (see https://tools.ietf.org/html/rfc2616#section-14.30).
fn admin_url() -> String {
// Don't use CONFIG.domain() directly, since the user may want to keep a
// trailing slash there, particularly when running under a subpath.
format!("{}{}{}", CONFIG.domain_origin(), CONFIG.domain_path(), ADMIN_PATH)
fn admin_url(referer: Referer) -> String {
// If we get a referer, use that to make it work when DOMAIN is not set
if let Some(mut referer) = referer.0 {
if let Some(start_index) = referer.find(ADMIN_PATH) {
referer.truncate(start_index + ADMIN_PATH.len());
return referer;
}
}
if CONFIG.domain_set() {
// Don't use CONFIG.domain() directly, since the user may want to keep a
// trailing slash there, particularly when running under a subpath.
format!("{}{}{}", CONFIG.domain_origin(), CONFIG.domain_path(), ADMIN_PATH)
} else {
// Last case, when there is no referer and no domain set; technically invalid, but better than nothing
ADMIN_PATH.to_string()
}
}
#[get("/", rank = 2)]
@ -91,14 +120,19 @@ struct LoginForm {
}
#[post("/", data = "<data>")]
fn post_admin_login(data: Form<LoginForm>, mut cookies: Cookies, ip: ClientIp) -> Result<Redirect, Flash<Redirect>> {
fn post_admin_login(
data: Form<LoginForm>,
mut cookies: Cookies,
ip: ClientIp,
referer: Referer,
) -> Result<Redirect, Flash<Redirect>> {
let data = data.into_inner();
// If the token is invalid, redirect to login page
if !_validate_token(&data.token) {
error!("Invalid admin token. IP: {}", ip.ip);
Err(Flash::error(
Redirect::to(admin_url()),
Redirect::to(admin_url(referer)),
"Invalid admin token, please try again.",
))
} else {
@ -114,7 +148,7 @@ fn post_admin_login(data: Form<LoginForm>, mut cookies: Cookies, ip: ClientIp) -
.finish();
cookies.add(cookie);
Ok(Redirect::to(admin_url()))
Ok(Redirect::to(admin_url(referer)))
}
}
@ -243,9 +277,9 @@ fn test_smtp(data: Json<InviteData>, _token: AdminToken) -> EmptyResult {
}
#[get("/logout")]
fn logout(mut cookies: Cookies) -> Result<Redirect, ()> {
fn logout(mut cookies: Cookies, referer: Referer) -> Result<Redirect, ()> {
cookies.remove(Cookie::named(COOKIE_NAME));
Ok(Redirect::to(admin_url()))
Ok(Redirect::to(admin_url(referer)))
}
#[get("/users")]
@ -259,13 +293,20 @@ fn get_users_json(_token: AdminToken, conn: DbConn) -> JsonResult {
#[get("/users/overview")]
fn users_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let users = User::get_all(&conn);
let dt_fmt = "%Y-%m-%d %H:%M:%S %Z";
let users_json: Vec<Value> = users.iter()
.map(|u| {
let mut usr = u.to_json(&conn);
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &conn));
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &conn));
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &conn) as i32));
usr
.map(|u| {
let mut usr = u.to_json(&conn);
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &conn));
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &conn));
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &conn) as i32));
usr["user_enabled"] = json!(u.enabled);
usr["created_at"] = json!(format_naive_datetime_local(&u.created_at, dt_fmt));
usr["last_active"] = match u.last_active(&conn) {
Some(dt) => json!(format_naive_datetime_local(&dt, dt_fmt)),
None => json!("Never")
};
usr
}).collect();
let text = AdminTemplateData::users(users_json).render()?;
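The `format_naive_datetime_local` helper imported above is not shown in this diff; conceptually it takes a stored UTC `NaiveDateTime` and renders it in the server's local timezone, so `%Z` in the `dt_fmt` string above produces a meaningful value. A hedged sketch with chrono (the real helper in util.rs may be implemented differently):
// Sketch only: interpret the naive timestamp as UTC, convert to local time, then format.
use chrono::{Local, NaiveDateTime, TimeZone, Utc};

pub fn format_naive_datetime_local(dt: &NaiveDateTime, format: &str) -> String {
    Utc.from_utc_datetime(dt)
        .with_timezone(&Local)
        .format(format)
        .to_string()
}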
@ -287,6 +328,24 @@ fn deauth_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
user.save(&conn)
}
#[post("/users/<uuid>/disable")]
fn disable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?;
Device::delete_all_by_user(&user.uuid, &conn)?;
user.reset_security_stamp();
user.enabled = false;
user.save(&conn)
}
#[post("/users/<uuid>/enable")]
fn enable_user(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?;
user.enabled = true;
user.save(&conn)
}
#[post("/users/<uuid>/remove-2fa")]
fn remove_2fa(uuid: String, _token: AdminToken, conn: DbConn) -> EmptyResult {
let mut user = User::find_by_uuid(&uuid, &conn).map_res("User doesn't exist")?;
@ -304,12 +363,12 @@ fn update_revision_users(_token: AdminToken, conn: DbConn) -> EmptyResult {
fn organizations_overview(_token: AdminToken, conn: DbConn) -> ApiResult<Html<String>> {
let organizations = Organization::get_all(&conn);
let organizations_json: Vec<Value> = organizations.iter().map(|o| {
let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &conn));
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &conn));
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &conn));
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &conn) as i32));
org
let mut org = o.to_json();
org["user_count"] = json!(UserOrganization::count_by_org(&o.uuid, &conn));
org["cipher_count"] = json!(Cipher::count_by_org(&o.uuid, &conn));
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &conn));
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &conn) as i32));
org
}).collect();
let text = AdminTemplateData::organizations(organizations_json).render()?;
@ -373,7 +432,7 @@ fn diagnostics(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
Ok(mut c) => {
c.sha.truncate(8);
c.sha
},
},
_ => "-".to_string()
},
match get_github_api::<GitRelease>("https://api.github.com/repos/dani-garcia/bw_web_builds/releases/latest") {
@ -384,11 +443,11 @@ fn diagnostics(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
} else {
("-".to_string(), "-".to_string(), "-".to_string())
};
// Run the date check as the last item right before filling the json.
// This should ensure that the time difference between the browser and the server is as minimal as possible.
let dt = Utc::now();
let server_time = dt.format("%Y-%m-%d %H:%M:%S").to_string();
let server_time = dt.format("%Y-%m-%d %H:%M:%S UTC").to_string();
let diagnostics_json = json!({
"dns_resolved": dns_resolved,

View File

@ -32,6 +32,7 @@ pub fn routes() -> Vec<rocket::Route> {
revision_date,
password_hint,
prelogin,
verify_password,
]
}
@ -114,7 +115,7 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
user.client_kdf_type = client_kdf_type;
}
user.set_password(&data.MasterPasswordHash);
user.set_password(&data.MasterPasswordHash, None);
user.akey = data.Key;
// Add extra fields if present
@ -231,7 +232,7 @@ fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn: DbCon
err!("Invalid password")
}
user.set_password(&data.NewMasterPasswordHash);
user.set_password(&data.NewMasterPasswordHash, Some("post_rotatekey"));
user.akey = data.Key;
user.save(&conn)
}
@ -258,7 +259,7 @@ fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, conn: DbConn) ->
user.client_kdf_iter = data.KdfIterations;
user.client_kdf_type = data.Kdf;
user.set_password(&data.NewMasterPasswordHash);
user.set_password(&data.NewMasterPasswordHash, None);
user.akey = data.Key;
user.save(&conn)
}
@ -337,6 +338,7 @@ fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, conn: DbConn, nt:
user.akey = data.Key;
user.private_key = Some(data.PrivateKey);
user.reset_security_stamp();
user.reset_stamp_exception();
user.save(&conn)
}
@ -444,7 +446,7 @@ fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: DbConn)
user.email_new = None;
user.email_new_token = None;
user.set_password(&data.NewMasterPasswordHash);
user.set_password(&data.NewMasterPasswordHash, None);
user.akey = data.Key;
user.save(&conn)
@ -459,7 +461,7 @@ fn post_verify_email(headers: Headers, _conn: DbConn) -> EmptyResult {
}
if let Err(e) = mail::send_verify_email(&user.email, &user.uuid) {
error!("Error sending delete account email: {:#?}", e);
error!("Error sending verify_email email: {:#?}", e);
}
Ok(())
@ -623,3 +625,20 @@ fn prelogin(data: JsonUpcase<PreloginData>, conn: DbConn) -> JsonResult {
"KdfIterations": kdf_iter
})))
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct VerifyPasswordData {
MasterPasswordHash: String,
}
#[post("/accounts/verify-password", data = "<data>")]
fn verify_password(data: JsonUpcase<VerifyPasswordData>, headers: Headers, _conn: DbConn) -> EmptyResult {
let data: VerifyPasswordData = data.into_inner().data;
let user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password")
}
Ok(())
}

View File

@ -1,6 +1,7 @@
use std::collections::{HashMap, HashSet};
use std::path::Path;
use chrono::{NaiveDateTime, Utc};
use rocket::{http::ContentType, request::Form, Data, Route};
use rocket_contrib::json::Json;
use serde_json::Value;
@ -17,6 +18,16 @@ use crate::{
};
pub fn routes() -> Vec<Route> {
// Note that many routes have an `admin` variant; this seems to be
// because the stored procedure that upstream Bitwarden uses to determine
// whether the user can edit a cipher doesn't take into account whether
// the user is an org owner/admin. The `admin` variant first checks
// whether the user is an owner/admin of the relevant org, and if so,
// allows the operation unconditionally.
//
// bitwarden_rs factors in the org owner/admin status as part of
// determining the write accessibility of a cipher, so most
// admin/non-admin implementations can be shared.
routes![
sync,
get_ciphers,
@ -38,7 +49,7 @@ pub fn routes() -> Vec<Route> {
post_cipher_admin,
post_cipher_share,
put_cipher_share,
put_cipher_share_seleted,
put_cipher_share_selected,
post_cipher,
put_cipher,
delete_cipher_post,
@ -50,6 +61,9 @@ pub fn routes() -> Vec<Route> {
delete_cipher_selected,
delete_cipher_selected_post,
delete_cipher_selected_put,
delete_cipher_selected_admin,
delete_cipher_selected_post_admin,
delete_cipher_selected_put_admin,
restore_cipher_put,
restore_cipher_put_admin,
restore_cipher_selected,
@ -82,7 +96,7 @@ fn sync(data: Form<SyncData>, headers: Headers, conn: DbConn) -> JsonResult {
let policies = OrgPolicy::find_by_user(&headers.user.uuid, &conn);
let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();
let ciphers = Cipher::find_by_user(&headers.user.uuid, &conn);
let ciphers = Cipher::find_by_user_visible(&headers.user.uuid, &conn);
let ciphers_json: Vec<Value> = ciphers
.iter()
.map(|c| c.to_json(&headers.host, &headers.user.uuid, &conn))
@ -107,7 +121,7 @@ fn sync(data: Form<SyncData>, headers: Headers, conn: DbConn) -> JsonResult {
#[get("/ciphers")]
fn get_ciphers(headers: Headers, conn: DbConn) -> JsonResult {
let ciphers = Cipher::find_by_user(&headers.user.uuid, &conn);
let ciphers = Cipher::find_by_user_visible(&headers.user.uuid, &conn);
let ciphers_json: Vec<Value> = ciphers
.iter()
@ -181,6 +195,14 @@ pub struct CipherData {
#[serde(rename = "Attachments")]
_Attachments: Option<Value>, // Unused, contains map of {id: filename}
Attachments2: Option<HashMap<String, Attachments2Data>>,
// The revision datetime (in ISO 8601 format) of the client's local copy
// of the cipher. This is used to prevent a client from updating a cipher
// when it doesn't have the latest version, as that can result in data
// loss. It's not an error when no value is provided; this can happen
// when using older client versions, or if the operation doesn't involve
// updating an existing cipher.
LastKnownRevisionDate: Option<String>,
}
#[derive(Deserialize, Debug)]
@ -190,22 +212,35 @@ pub struct Attachments2Data {
Key: String,
}
/// Called when an org admin clones an org cipher.
#[post("/ciphers/admin", data = "<data>")]
fn post_ciphers_admin(data: JsonUpcase<ShareCipherData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
let data: ShareCipherData = data.into_inner().data;
post_ciphers_create(data, headers, conn, nt)
}
/// Called when creating a new org-owned cipher, or cloning a cipher (whether
/// user- or org-owned). When cloning a cipher to a user-owned cipher,
/// `organizationId` is null.
#[post("/ciphers/create", data = "<data>")]
fn post_ciphers_create(data: JsonUpcase<ShareCipherData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
let mut data: ShareCipherData = data.into_inner().data;
let mut cipher = Cipher::new(data.Cipher.Type, data.Cipher.Name.clone());
cipher.user_uuid = Some(headers.user.uuid.clone());
cipher.save(&conn)?;
// When cloning a cipher, the Bitwarden clients seem to set this field
// based on the cipher being cloned (when creating a new cipher, it's set
// to null as expected). However, `cipher.created_at` is initialized to
// the current time, so the stale data check will end up failing down the
// line. Since this function only creates new ciphers (whether by cloning
// or otherwise), we can just ignore this field entirely.
data.Cipher.LastKnownRevisionDate = None;
share_cipher_by_uuid(&cipher.uuid, data, &headers, &conn, &nt)
}
#[post("/ciphers/create", data = "<data>")]
fn post_ciphers_create(data: JsonUpcase<ShareCipherData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
post_ciphers_admin(data, headers, conn, nt)
}
/// Called when creating a new user-owned cipher.
#[post("/ciphers", data = "<data>")]
fn post_ciphers(data: JsonUpcase<CipherData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
let data: CipherData = data.into_inner().data;
@ -225,6 +260,17 @@ pub fn update_cipher_from_data(
nt: &Notify,
ut: UpdateType,
) -> EmptyResult {
// Check that the client isn't updating an existing cipher with stale data.
if let Some(dt) = data.LastKnownRevisionDate {
match NaiveDateTime::parse_from_str(&dt, "%+") { // ISO 8601 format
Err(err) =>
warn!("Error parsing LastKnownRevisionDate '{}': {}", dt, err),
Ok(dt) if cipher.updated_at.signed_duration_since(dt).num_seconds() > 1 =>
err!("The client copy of this cipher is out of date. Resync the client and try again."),
Ok(_) => (),
}
}
if cipher.organization_uuid.is_some() && cipher.organization_uuid != data.OrganizationId {
err!("Organization mismatch. Please resync the client before updating the cipher")
}
@ -303,7 +349,6 @@ pub fn update_cipher_from_data(
type_data["PasswordHistory"] = data.PasswordHistory.clone().unwrap_or(Value::Null);
// TODO: ******* Backwards compat end **********
cipher.favorite = data.Favorite.unwrap_or(false);
cipher.name = data.Name;
cipher.notes = data.Notes;
cipher.fields = data.Fields.map(|f| f.to_string());
@ -312,6 +357,7 @@ pub fn update_cipher_from_data(
cipher.save(&conn)?;
cipher.move_to_folder(data.FolderId, &headers.user.uuid, &conn)?;
cipher.set_favorite(data.Favorite, &headers.user.uuid, &conn)?;
if ut != UpdateType::None {
nt.send_cipher_update(ut, &cipher, &cipher.update_users_revision(&conn));
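The `set_favorite` call above replaces the old per-cipher `favorite` column with rows in the per-user `favorites` table created by the migrations earlier in this diff. A rough, assumed sketch of what such a helper could do with Diesel is shown below; the real model code takes an `Option<bool>` and is structured differently around the generated schema.
// Illustrative sketch only; not the actual Cipher::set_favorite implementation.
#[macro_use]
extern crate diesel;

use diesel::prelude::*;
use diesel::sqlite::SqliteConnection;

table! {
    favorites (user_uuid, cipher_uuid) {
        user_uuid -> Text,
        cipher_uuid -> Text,
    }
}

// Mark or unmark a cipher as a favorite for a single user by inserting or
// deleting the corresponding row in the favorites join table.
pub fn set_favorite(conn: &SqliteConnection, cipher_uuid: &str, user_uuid: &str, favorite: bool) -> QueryResult<usize> {
    use self::favorites::dsl;

    if favorite {
        diesel::insert_into(dsl::favorites)
            .values((dsl::user_uuid.eq(user_uuid), dsl::cipher_uuid.eq(cipher_uuid)))
            .execute(conn)
    } else {
        diesel::delete(
            dsl::favorites
                .filter(dsl::user_uuid.eq(user_uuid))
                .filter(dsl::cipher_uuid.eq(cipher_uuid)),
        )
        .execute(conn)
    }
}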
@ -374,6 +420,7 @@ fn post_ciphers_import(data: JsonUpcase<ImportData>, headers: Headers, conn: DbC
Ok(())
}
/// Called when an org admin modifies an existing org cipher.
#[put("/ciphers/<uuid>/admin", data = "<data>")]
fn put_cipher_admin(
uuid: String,
@ -410,6 +457,11 @@ fn put_cipher(uuid: String, data: JsonUpcase<CipherData>, headers: Headers, conn
None => err!("Cipher doesn't exist"),
};
// TODO: Check if only the folder ID or favorite status is being changed.
// These are per-user properties that technically aren't part of the
// cipher itself, so the user shouldn't need write access to change these.
// Interestingly, upstream Bitwarden doesn't properly handle this either.
if !cipher.is_write_accessible_to_user(&headers.user.uuid, &conn) {
err!("Cipher is not write accessible")
}
@ -543,7 +595,7 @@ struct ShareSelectedCipherData {
}
#[put("/ciphers/share", data = "<data>")]
fn put_cipher_share_seleted(
fn put_cipher_share_selected(
data: JsonUpcase<ShareSelectedCipherData>,
headers: Headers,
conn: DbConn,
@ -857,7 +909,22 @@ fn delete_cipher_selected_post(data: JsonUpcase<Value>, headers: Headers, conn:
#[put("/ciphers/delete", data = "<data>")]
fn delete_cipher_selected_put(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
_delete_multiple_ciphers(data, headers, conn, true, nt)
_delete_multiple_ciphers(data, headers, conn, true, nt) // soft delete
}
#[delete("/ciphers/admin", data = "<data>")]
fn delete_cipher_selected_admin(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
delete_cipher_selected(data, headers, conn, nt)
}
#[post("/ciphers/delete-admin", data = "<data>")]
fn delete_cipher_selected_post_admin(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
delete_cipher_selected_post(data, headers, conn, nt)
}
#[put("/ciphers/delete-admin", data = "<data>")]
fn delete_cipher_selected_put_admin(data: JsonUpcase<Value>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
delete_cipher_selected_put(data, headers, conn, nt)
}
#[put("/ciphers/<uuid>/restore")]
@ -997,13 +1064,14 @@ fn _delete_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &DbConn, soft_del
}
if soft_delete {
cipher.deleted_at = Some(chrono::Utc::now().naive_utc());
cipher.deleted_at = Some(Utc::now().naive_utc());
cipher.save(&conn)?;
nt.send_cipher_update(UpdateType::CipherUpdate, &cipher, &cipher.update_users_revision(&conn));
} else {
cipher.delete(&conn)?;
nt.send_cipher_update(UpdateType::CipherDelete, &cipher, &cipher.update_users_revision(&conn));
}
nt.send_cipher_update(UpdateType::CipherDelete, &cipher, &cipher.update_users_revision(&conn));
Ok(())
}

View File

@ -5,7 +5,7 @@ use serde_json::Value;
use crate::{
api::{EmptyResult, JsonResult, JsonUpcase, JsonUpcaseVec, Notify, NumberOrString, PasswordData, UpdateType},
auth::{decode_invite, AdminHeaders, Headers, OwnerHeaders},
auth::{decode_invite, AdminHeaders, Headers, OwnerHeaders, ManagerHeaders, ManagerHeadersLoose},
db::{models::*, DbConn},
mail, CONFIG,
};
@ -47,6 +47,7 @@ pub fn routes() -> Vec<Route> {
list_policies_token,
get_policy,
put_policy,
get_plans,
]
}
@ -76,6 +77,10 @@ struct NewCollectionData {
#[post("/organizations", data = "<data>")]
fn create_organization(headers: Headers, data: JsonUpcase<OrgData>, conn: DbConn) -> JsonResult {
if !CONFIG.is_org_creation_allowed(&headers.user.email) {
err!("User not allowed to create organizations")
}
let data: OrgData = data.into_inner().data;
let org = Organization::new(data.Name, data.BillingEmail);
@ -212,7 +217,7 @@ fn get_org_collections(org_id: String, _headers: AdminHeaders, conn: DbConn) ->
#[post("/organizations/<org_id>/collections", data = "<data>")]
fn post_organization_collections(
org_id: String,
_headers: AdminHeaders,
headers: ManagerHeadersLoose,
data: JsonUpcase<NewCollectionData>,
conn: DbConn,
) -> JsonResult {
@ -223,9 +228,22 @@ fn post_organization_collections(
None => err!("Can't find organization details"),
};
// Get the user_organization record so that we can check if the user has access to all collections.
let user_org = match UserOrganization::find_by_user_and_org(&headers.user.uuid, &org_id, &conn) {
Some(u) => u,
None => err!("User is not part of organization"),
};
let collection = Collection::new(org.uuid, data.Name);
collection.save(&conn)?;
// If the user doesn't have access to all collections (which is only the case for a Manager),
// then we need to save the creating user's uuid (the Manager) to the users_collection table.
// Otherwise the user will not have access to the collection they just created.
if !user_org.access_all {
CollectionUser::save(&headers.user.uuid, &collection.uuid, false, false, &conn)?;
}
Ok(Json(collection.to_json()))
}
@ -233,7 +251,7 @@ fn post_organization_collections(
fn put_organization_collection_update(
org_id: String,
col_id: String,
headers: AdminHeaders,
headers: ManagerHeaders,
data: JsonUpcase<NewCollectionData>,
conn: DbConn,
) -> JsonResult {
@ -244,7 +262,7 @@ fn put_organization_collection_update(
fn post_organization_collection_update(
org_id: String,
col_id: String,
_headers: AdminHeaders,
_headers: ManagerHeaders,
data: JsonUpcase<NewCollectionData>,
conn: DbConn,
) -> JsonResult {
@ -312,7 +330,7 @@ fn post_organization_collection_delete_user(
}
#[delete("/organizations/<org_id>/collections/<col_id>")]
fn delete_organization_collection(org_id: String, col_id: String, _headers: AdminHeaders, conn: DbConn) -> EmptyResult {
fn delete_organization_collection(org_id: String, col_id: String, _headers: ManagerHeaders, conn: DbConn) -> EmptyResult {
match Collection::find_by_uuid(&col_id, &conn) {
None => err!("Collection not found"),
Some(collection) => {
@ -336,7 +354,7 @@ struct DeleteCollectionData {
fn post_organization_collection_delete(
org_id: String,
col_id: String,
headers: AdminHeaders,
headers: ManagerHeaders,
_data: JsonUpcase<DeleteCollectionData>,
conn: DbConn,
) -> EmptyResult {
@ -344,7 +362,7 @@ fn post_organization_collection_delete(
}
#[get("/organizations/<org_id>/collections/<coll_id>/details")]
fn get_org_collection_detail(org_id: String, coll_id: String, headers: AdminHeaders, conn: DbConn) -> JsonResult {
fn get_org_collection_detail(org_id: String, coll_id: String, headers: ManagerHeaders, conn: DbConn) -> JsonResult {
match Collection::find_by_uuid_and_user(&coll_id, &headers.user.uuid, &conn) {
None => err!("Collection not found"),
Some(collection) => {
@ -358,7 +376,7 @@ fn get_org_collection_detail(org_id: String, coll_id: String, headers: AdminHead
}
#[get("/organizations/<org_id>/collections/<coll_id>/users")]
fn get_collection_users(org_id: String, coll_id: String, _headers: AdminHeaders, conn: DbConn) -> JsonResult {
fn get_collection_users(org_id: String, coll_id: String, _headers: ManagerHeaders, conn: DbConn) -> JsonResult {
// Get org and collection, check that collection is from org
let collection = match Collection::find_by_uuid_and_org(&coll_id, &org_id, &conn) {
None => err!("Collection not found in Organization"),
@ -383,7 +401,7 @@ fn put_collection_users(
org_id: String,
coll_id: String,
data: JsonUpcaseVec<CollectionData>,
_headers: AdminHeaders,
_headers: ManagerHeaders,
conn: DbConn,
) -> EmptyResult {
// Get org and collection, check that collection is from org
@ -435,7 +453,7 @@ fn get_org_details(data: Form<OrgIdData>, headers: Headers, conn: DbConn) -> Jso
}
#[get("/organizations/<org_id>/users")]
fn get_org_users(org_id: String, _headers: AdminHeaders, conn: DbConn) -> JsonResult {
fn get_org_users(org_id: String, _headers: ManagerHeadersLoose, conn: DbConn) -> JsonResult {
let users = UserOrganization::find_by_org(&org_id, &conn);
let users_json: Vec<Value> = users.iter().map(|c| c.to_json_user_details(&conn)).collect();
@ -987,3 +1005,55 @@ fn put_policy(org_id: String, pol_type: i32, data: Json<PolicyData>, _headers: A
Ok(Json(policy.to_json()))
}
#[get("/plans")]
fn get_plans(_headers: Headers, _conn: DbConn) -> JsonResult {
Ok(Json(json!({
"Object": "list",
"Data": [
{
"Object": "plan",
"Type": 0,
"Product": 0,
"Name": "Free",
"IsAnnual": false,
"NameLocalizationKey": "planNameFree",
"DescriptionLocalizationKey": "planDescFree",
"CanBeUsedByBusiness": false,
"BaseSeats": 2,
"BaseStorageGb": null,
"MaxCollections": 2,
"MaxUsers": 2,
"HasAdditionalSeatsOption": false,
"MaxAdditionalSeats": null,
"HasAdditionalStorageOption": false,
"MaxAdditionalStorage": null,
"HasPremiumAccessOption": false,
"TrialPeriodDays": null,
"HasSelfHost": false,
"HasPolicies": false,
"HasGroups": false,
"HasDirectory": false,
"HasEvents": false,
"HasTotp": false,
"Has2fa": false,
"HasApi": false,
"HasSso": false,
"UsersGetPremium": false,
"UpgradeSortOrder": -1,
"DisplaySortOrder": -1,
"LegacyYear": null,
"Disabled": false,
"StripePlanId": null,
"StripeSeatPlanId": null,
"StripeStoragePlanId": null,
"StripePremiumAccessPlanId": null,
"BasePrice": 0.0,
"SeatPrice": 0.0,
"AdditionalStoragePricePerGb": 0.0,
"PremiumAccessOptionPrice": 0.0
}
],
"ContinuationToken": null
})))
}

View File

@ -1,13 +1,15 @@
use std::{
collections::HashMap,
fs::{create_dir_all, remove_file, symlink_metadata, File},
io::prelude::*,
net::{IpAddr, ToSocketAddrs},
sync::RwLock,
time::{Duration, SystemTime},
};
use once_cell::sync::Lazy;
use regex::Regex;
use reqwest::{blocking::Client, blocking::Response, header::HeaderMap, Url};
use reqwest::{blocking::Client, blocking::Response, header, Url};
use rocket::{http::ContentType, http::Cookie, response::Content, Route};
use soup::prelude::*;
@ -17,33 +19,67 @@ pub fn routes() -> Vec<Route> {
routes![icon]
}
const FALLBACK_ICON: &[u8; 344] = include_bytes!("../static/fallback-icon.png");
const ALLOWED_CHARS: &str = "_-.";
static CLIENT: Lazy<Client> = Lazy::new(|| {
// Generate the default headers
let mut default_headers = header::HeaderMap::new();
default_headers.insert(header::USER_AGENT, header::HeaderValue::from_static("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.1 Safari/605.1.15"));
default_headers.insert(header::ACCEPT_LANGUAGE, header::HeaderValue::from_static("en-US,en;q=0.8"));
default_headers.insert(header::CACHE_CONTROL, header::HeaderValue::from_static("no-cache"));
default_headers.insert(header::PRAGMA, header::HeaderValue::from_static("no-cache"));
default_headers.insert(header::ACCEPT, header::HeaderValue::from_static("text/html,application/xhtml+xml,application/xml; q=0.9,image/webp,image/apng,*/*;q=0.8"));
// Reuse the client between requests
Client::builder()
.timeout(Duration::from_secs(CONFIG.icon_download_timeout()))
.default_headers(_header_map())
.default_headers(default_headers)
.build()
.unwrap()
});
static ICON_REL_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"icon$|apple.*icon").unwrap());
static ICON_HREF_REGEX: Lazy<Regex> =
Lazy::new(|| Regex::new(r"(?i)\w+\.(jpg|jpeg|png|ico)(\?.*)?$|^data:image.*base64").unwrap());
// Build Regex only once since this takes a lot of time.
static ICON_REL_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?i)icon$|apple.*icon").unwrap());
static ICON_SIZE_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?x)(\d+)\D*(\d+)").unwrap());
// Special HashMap which holds the user-defined Regex to speed up the regex matching.
static ICON_BLACKLIST_REGEX: Lazy<RwLock<HashMap<String, Regex>>> = Lazy::new(|| RwLock::new(HashMap::new()));
#[get("/<domain>/icon.png")]
fn icon(domain: String) -> Option<Cached<Content<Vec<u8>>>> {
if !is_valid_domain(&domain) {
warn!("Invalid domain: {}", domain);
return None;
}
get_icon(&domain).map(|icon| Cached::long(Content(ContentType::new("image", "x-icon"), icon)))
}
/// Returns whether the provided domain is valid or not.
///
/// This does some manual checks and makes use of Url to do some basic checking.
/// Domains can't be longer than 63 characters (not counting multiple subdomains) according to the RFCs, but we limit the total size to 255.
fn is_valid_domain(domain: &str) -> bool {
// Don't allow empty or too big domains or path traversal
if domain.is_empty() || domain.len() > 255 || domain.contains("..") {
// If parsing the domain fails using Url, it will not work with reqwest.
if let Err(parse_error) = Url::parse(format!("https://{}", domain).as_str()) {
debug!("Domain parse error: '{}' - {:?}", domain, parse_error);
return false;
} else if domain.is_empty()
|| domain.contains("..")
|| domain.starts_with('.')
|| domain.starts_with('-')
|| domain.ends_with('-')
{
debug!("Domain validation error: '{}' is either empty, contains '..', starts with an '.', starts or ends with a '-'", domain);
return false;
} else if domain.len() > 255 {
debug!("Domain validation error: '{}' exceeds 255 characters", domain);
return false;
}
// Only alphanumeric or specific characters
for c in domain.chars() {
if !c.is_alphanumeric() && !ALLOWED_CHARS.contains(c) {
debug!("Domain validation error: '{}' contains an invalid character '{}'", domain, c);
return false;
}
}
@ -51,21 +87,10 @@ fn is_valid_domain(domain: &str) -> bool {
true
}
#[get("/<domain>/icon.png")]
fn icon(domain: String) -> Cached<Content<Vec<u8>>> {
let icon_type = ContentType::new("image", "x-icon");
if !is_valid_domain(&domain) {
warn!("Invalid domain: {:#?}", domain);
return Cached::long(Content(icon_type, FALLBACK_ICON.to_vec()));
}
Cached::long(Content(icon_type, get_icon(&domain)))
}
/// TODO: This is extracted from IpAddr::is_global, which is unstable:
/// https://doc.rust-lang.org/nightly/std/net/enum.IpAddr.html#method.is_global
/// Remove once https://github.com/rust-lang/rust/issues/27709 is merged
#[allow(clippy::nonminimal_bool)]
#[cfg(not(feature = "unstable"))]
fn is_global(ip: IpAddr) -> bool {
match ip {
@ -161,7 +186,7 @@ mod tests {
}
}
fn check_icon_domain_is_blacklisted(domain: &str) -> bool {
fn is_domain_blacklisted(domain: &str) -> bool {
let mut is_blacklisted = CONFIG.icon_blacklist_non_global_ips()
&& (domain, 0)
.to_socket_addrs()
@ -179,7 +204,31 @@ fn check_icon_domain_is_blacklisted(domain: &str) -> bool {
// Skip the regex check if the previous one is true already
if !is_blacklisted {
if let Some(blacklist) = CONFIG.icon_blacklist_regex() {
let regex = Regex::new(&blacklist).expect("Valid Regex");
let mut regex_hashmap = ICON_BLACKLIST_REGEX.read().unwrap();
// Use the pre-generated Regex stored in the Lazy HashMap if there is one, else generate it.
let regex = if let Some(regex) = regex_hashmap.get(&blacklist) {
regex
} else {
drop(regex_hashmap);
let mut regex_hashmap_write = ICON_BLACKLIST_REGEX.write().unwrap();
// Clear the current list if the previous key doesn't exist.
// This prevents the HashMap from growing after someone changes the regex via the admin interface.
if regex_hashmap_write.len() >= 1 {
regex_hashmap_write.clear();
}
// Generate the regex and store it in the Lazy Static HashMap.
let blacklist_regex = Regex::new(&blacklist).unwrap();
regex_hashmap_write.insert(blacklist.to_string(), blacklist_regex);
drop(regex_hashmap_write);
regex_hashmap = ICON_BLACKLIST_REGEX.read().unwrap();
regex_hashmap.get(&blacklist).unwrap()
};
// Use the pre-generated Regex stored in the Lazy HashMap.
if regex.is_match(&domain) {
warn!("Blacklisted domain: {:#?} matched {:#?}", domain, blacklist);
is_blacklisted = true;
@ -190,39 +239,38 @@ fn check_icon_domain_is_blacklisted(domain: &str) -> bool {
is_blacklisted
}
fn get_icon(domain: &str) -> Vec<u8> {
fn get_icon(domain: &str) -> Option<Vec<u8>> {
let path = format!("{}/{}.png", CONFIG.icon_cache_folder(), domain);
// Check for expiration of negatively cached copy
if icon_is_negcached(&path) {
return None;
}
if let Some(icon) = get_cached_icon(&path) {
return icon;
return Some(icon);
}
if CONFIG.disable_icon_download() {
return FALLBACK_ICON.to_vec();
return None;
}
// Get the icon, or fallback in case of error
// Get the icon, or None in case of error
match download_icon(&domain) {
Ok(icon) => {
save_icon(&path, &icon);
icon
Some(icon)
}
Err(e) => {
error!("Error downloading icon: {:?}", e);
let miss_indicator = path + ".miss";
let empty_icon = Vec::new();
save_icon(&miss_indicator, &empty_icon);
FALLBACK_ICON.to_vec()
save_icon(&miss_indicator, &[]);
None
}
}
}
fn get_cached_icon(path: &str) -> Option<Vec<u8>> {
// Check for expiration of negatively cached copy
if icon_is_negcached(path) {
return Some(FALLBACK_ICON.to_vec());
}
// Check for expiration of successfully cached copy
if icon_is_expired(path) {
return None;
@ -284,6 +332,12 @@ impl Icon {
}
}
struct IconUrlResult {
iconlist: Vec<Icon>,
cookies: String,
referer: String,
}
/// Returns a Result/Tuple which holds a Vector of Icons and a string which holds the cookies from the last response.
/// There will always be a result with a string which will contain https://example.com/favicon.ico and an empty string for the cookies.
/// This does not mean that that location exists, but it is the default location browsers use.
@ -296,24 +350,65 @@ impl Icon {
/// let (mut iconlist, cookie_str) = get_icon_url("github.com")?;
/// let (mut iconlist, cookie_str) = get_icon_url("gitlab.com")?;
/// ```
fn get_icon_url(domain: &str) -> Result<(Vec<Icon>, String), Error> {
fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
// Default URL with secure and insecure schemes
let ssldomain = format!("https://{}", domain);
let httpdomain = format!("http://{}", domain);
// First check the domain as given during the request for both HTTPS and HTTP.
let resp = match get_page(&ssldomain).or_else(|_| get_page(&httpdomain)) {
Ok(c) => Ok(c),
Err(e) => {
let mut sub_resp = Err(e);
// When the domain is not an IP, and has more than one dot, remove all subdomains.
let is_ip = domain.parse::<IpAddr>();
if is_ip.is_err() && domain.matches('.').count() > 1 {
let mut domain_parts = domain.split('.');
let base_domain = format!(
"{base}.{tld}",
tld = domain_parts.next_back().unwrap(),
base = domain_parts.next_back().unwrap()
);
if is_valid_domain(&base_domain) {
let sslbase = format!("https://{}", base_domain);
let httpbase = format!("http://{}", base_domain);
debug!("[get_icon_url]: Trying without subdomains '{}'", base_domain);
sub_resp = get_page(&sslbase).or_else(|_| get_page(&httpbase));
}
// When the domain is not an IP, and has fewer than 2 dots, try to add a www. prefix in front of it.
} else if is_ip.is_err() && domain.matches('.').count() < 2 {
let www_domain = format!("www.{}", domain);
if is_valid_domain(&www_domain) {
let sslwww = format!("https://{}", www_domain);
let httpwww = format!("http://{}", www_domain);
debug!("[get_icon_url]: Trying with www. prefix '{}'", www_domain);
sub_resp = get_page(&sslwww).or_else(|_| get_page(&httpwww));
}
}
sub_resp
}
};
// Create the iconlist
let mut iconlist: Vec<Icon> = Vec::new();
// Create the cookie_str to hold all the cookies from the response.
// These cookies can be used to request/download the favicon image.
// Some sites have extra security in place with, for example, XSRF tokens.
let mut cookie_str = String::new();
let mut cookie_str = "".to_string();
let mut referer = "".to_string();
let resp = get_page(&ssldomain).or_else(|_| get_page(&httpdomain));
if let Ok(content) = resp {
// Extract the URL from the response in case redirects occurred (like @ gitlab.com)
let url = content.url().clone();
// Get all the cookies and pass it on to the next function.
// Needed for XSRF Cookies for example (like @ mijn.ing.nl)
let raw_cookies = content.headers().get_all("set-cookie");
cookie_str = raw_cookies
.iter()
@ -327,6 +422,10 @@ fn get_icon_url(domain: &str) -> Result<(Vec<Icon>, String), Error> {
})
.collect::<String>();
// Set the referer to be used on the final request; some sites check this.
// Mostly used to prevent direct linking and for other security reasons.
referer = url.as_str().to_string();
// Add the default favicon.ico to the list with the domain the content responded from.
iconlist.push(Icon::new(35, url.join("/favicon.ico").unwrap().into_string()));
@ -339,14 +438,18 @@ fn get_icon_url(domain: &str) -> Result<(Vec<Icon>, String), Error> {
let favicons = soup
.tag("link")
.attr("rel", ICON_REL_REGEX.clone()) // Only use icon rels
.attr("href", ICON_HREF_REGEX.clone()) // Only allow specific extensions
.attr_name("href") // Make sure there is a href
.find_all();
// Loop through all the found icons and determine their priority
for favicon in favicons {
let sizes = favicon.get("sizes");
let href = favicon.get("href").expect("Missing href");
let full_href = url.join(&href).unwrap().into_string();
let href = favicon.get("href").unwrap();
// Skip invalid URLs
let full_href = match url.join(&href) {
Ok(h) => h.into_string(),
_ => continue,
};
let priority = get_icon_priority(&full_href, sizes);
@ -362,28 +465,33 @@ fn get_icon_url(domain: &str) -> Result<(Vec<Icon>, String), Error> {
iconlist.sort_by_key(|x| x.priority);
// There always is an icon in the list, so no need to check if it exists, and just return the first one
Ok((iconlist, cookie_str))
Ok(IconUrlResult{
iconlist,
cookies: cookie_str,
referer
})
}
fn get_page(url: &str) -> Result<Response, Error> {
get_page_with_cookies(url, "")
get_page_with_cookies(url, "", "")
}
fn get_page_with_cookies(url: &str, cookie_str: &str) -> Result<Response, Error> {
if check_icon_domain_is_blacklisted(Url::parse(url).unwrap().host_str().unwrap_or_default()) {
err!("Favicon rel linked to a non blacklisted domain!");
fn get_page_with_cookies(url: &str, cookie_str: &str, referer: &str) -> Result<Response, Error> {
if is_domain_blacklisted(Url::parse(url).unwrap().host_str().unwrap_or_default()) {
err!("Favicon rel linked to a blacklisted domain!");
}
if cookie_str.is_empty() {
CLIENT.get(url).send()?.error_for_status().map_err(Into::into)
} else {
CLIENT
.get(url)
.header("cookie", cookie_str)
.send()?
.error_for_status()
.map_err(Into::into)
let mut client = CLIENT.get(url);
if !cookie_str.is_empty() {
client = client.header("Cookie", cookie_str)
}
if !referer.is_empty() {
client = client.header("Referer", referer)
}
client.send()?
.error_for_status()
.map_err(Into::into)
}
/// Returns an Integer with the priority of the icon type to prefer.
@ -411,7 +519,7 @@ fn get_icon_priority(href: &str, sizes: Option<String>) -> u8 {
1
} else if width == 64 {
2
} else if width >= 24 && width <= 128 {
} else if (24..=128).contains(&width) {
3
} else if width == 16 {
4
@ -466,17 +574,17 @@ fn parse_sizes(sizes: Option<String>) -> (u16, u16) {
}
fn download_icon(domain: &str) -> Result<Vec<u8>, Error> {
if check_icon_domain_is_blacklisted(domain) {
if is_domain_blacklisted(domain) {
err!("Domain is blacklisted", domain)
}
let (iconlist, cookie_str) = get_icon_url(&domain)?;
let icon_result = get_icon_url(&domain)?;
let mut buffer = Vec::new();
use data_url::DataUrl;
for icon in iconlist.iter().take(5) {
for icon in icon_result.iconlist.iter().take(5) {
if icon.href.starts_with("data:image") {
let datauri = DataUrl::process(&icon.href).unwrap();
// Check if we are able to decode the data uri
@ -491,13 +599,13 @@ fn download_icon(domain: &str) -> Result<Vec<u8>, Error> {
_ => warn!("data uri is invalid"),
};
} else {
match get_page_with_cookies(&icon.href, &cookie_str) {
match get_page_with_cookies(&icon.href, &icon_result.cookies, &icon_result.referer) {
Ok(mut res) => {
info!("Downloaded icon from {}", icon.href);
res.copy_to(&mut buffer)?;
break;
}
Err(_) => info!("Download failed for {}", icon.href),
},
_ => warn!("Download failed for {}", icon.href),
};
}
}
@ -522,25 +630,3 @@ fn save_icon(path: &str, icon: &[u8]) {
}
}
}
fn _header_map() -> HeaderMap {
// Set some default headers for the request.
// Use a browser like user-agent to make sure most websites will return there correct website.
use reqwest::header::*;
macro_rules! headers {
($( $name:ident : $value:literal),+ $(,)? ) => {
let mut headers = HeaderMap::new();
$( headers.insert($name, HeaderValue::from_static($value)); )+
headers
};
}
headers! {
USER_AGENT: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299",
ACCEPT_LANGUAGE: "en-US,en;q=0.8",
CACHE_CONTROL: "no-cache",
PRAGMA: "no-cache",
ACCEPT: "text/html,application/xhtml+xml,application/xml; q=0.9,image/webp,image/apng,*/*;q=0.8",
}
}

View File

@ -68,6 +68,11 @@ fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {
"refresh_token": device.refresh_token,
"Key": user.akey,
"PrivateKey": user.private_key,
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"ResetMasterPassword": false, // TODO: according to official server seems something like: user.password_hash.is_empty(), but would need testing
"scope": "api offline_access"
})))
}
@ -97,6 +102,14 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
)
}
// Check if the user is disabled
if !user.enabled {
err!(
"This user has been disabled",
format!("IP: {}. Username: {}.", ip.ip, username)
)
}
let now = Local::now();
if user.verified_at.is_none() && CONFIG.mail_enabled() && CONFIG.signups_verify() {
@ -142,7 +155,6 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
}
// Common
let user = User::find_by_uuid(&device.user_uuid, &conn).unwrap();
let orgs = UserOrganization::find_by_user(&user.uuid, &conn);
let (access_token, expires_in) = device.refresh_tokens(&user, orgs);
@ -156,6 +168,11 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
"Key": user.akey,
"PrivateKey": user.private_key,
//"TwoFactorToken": "11122233333444555666777888999"
"Kdf": user.client_kdf_type,
"KdfIterations": user.client_kdf_iter,
"ResetMasterPassword": false,// TODO: Same as above
"scope": "api offline_access"
});
if let Some(token) = twofactor_token {

View File

@ -20,10 +20,12 @@ static SHOW_WEBSOCKETS_MSG: AtomicBool = AtomicBool::new(true);
#[get("/hub")]
fn websockets_err() -> EmptyResult {
if CONFIG.websocket_enabled() && SHOW_WEBSOCKETS_MSG.compare_and_swap(true, false, Ordering::Relaxed) {
err!("###########################################################
err!(
"###########################################################
'/notifications/hub' should be proxied to the websocket server or notifications won't work.
Go to the Wiki for more info, or disable WebSockets by setting WEBSOCKET_ENABLED=false.
###########################################################################################")
###########################################################################################"
)
} else {
Err(Error::empty())
}
@ -137,7 +139,6 @@ struct InitialMessage {
const PING_MS: u64 = 15_000;
const PING: Token = Token(1);
const ID_KEY: &str = "id=";
const ACCESS_TOKEN_KEY: &str = "access_token=";
impl WSHandler {
@ -148,6 +149,32 @@ impl WSHandler {
let io_error = io::Error::from(io::ErrorKind::InvalidData);
Err(ws::Error::new(ws::ErrorKind::Io(io_error), msg))
}
fn get_request_token(&self, hs: Handshake) -> Option<String> {
use std::str::from_utf8;
// Verify we have a token header
if let Some(header_value) = hs.request.header("Authorization") {
if let Ok(converted) = from_utf8(header_value) {
if let Some(token_part) = converted.split("Bearer ").nth(1) {
return Some(token_part.into());
}
}
};
// Otherwise verify the query parameter value
let path = hs.request.resource();
if let Some(params) = path.split('?').nth(1) {
let params_iter = params.split('&').take(1);
for val in params_iter {
if val.starts_with(ACCESS_TOKEN_KEY) {
return Some(val[ACCESS_TOKEN_KEY.len()..].into());
}
}
};
None
}
}
impl Handler for WSHandler {
@ -156,35 +183,16 @@ impl Handler for WSHandler {
//
// We don't use `id`, and as of around 2020-03-25, the official clients
// no longer seem to pass `id` (only `access_token`).
let path = hs.request.resource();
let (_id, access_token) = match path.split('?').nth(1) {
Some(params) => {
let params_iter = params.split('&').take(2);
let mut id = None;
let mut access_token = None;
for val in params_iter {
if val.starts_with(ID_KEY) {
id = Some(&val[ID_KEY.len()..]);
} else if val.starts_with(ACCESS_TOKEN_KEY) {
access_token = Some(&val[ACCESS_TOKEN_KEY.len()..]);
}
}
match (id, access_token) {
(Some(a), Some(b)) => (a, b),
(None, Some(b)) => ("", b), // Ignore missing `id`.
_ => return self.err("Missing access token"),
}
}
None => return self.err("Missing query parameters"),
// Get user token from header or query parameter
let access_token = match self.get_request_token(hs) {
Some(token) => token,
_ => return self.err("Missing access token"),
};
// Validate the user
use crate::auth;
let claims = match auth::decode_login(access_token) {
let claims = match auth::decode_login(access_token.as_str()) {
Ok(claims) => claims,
Err(_) => return self.err("Invalid access token provided"),
};
@ -335,7 +343,7 @@ impl WebSocketUsers {
/* Message Structure
[
1, // MessageType.Invocation
{}, // Headers
{}, // Headers (map)
null, // InvocationId
"ReceiveMessage", // Target
[ // Arguments
@ -352,7 +360,7 @@ fn create_update(payload: Vec<(Value, Value)>, ut: UpdateType) -> Vec<u8> {
let value = V::Array(vec![
1.into(),
V::Array(vec![]),
V::Map(vec![]),
V::Nil,
"ReceiveMessage".into(),
V::Array(vec![V::Map(vec![

View File

@ -78,9 +78,12 @@ fn static_files(filename: String) -> Result<Content<&'static [u8]>, Error> {
"hibp.png" => Ok(Content(ContentType::PNG, include_bytes!("../static/images/hibp.png"))),
"bootstrap.css" => Ok(Content(ContentType::CSS, include_bytes!("../static/scripts/bootstrap.css"))),
"bootstrap-native-v4.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/bootstrap-native-v4.js"))),
"bootstrap-native.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/bootstrap-native.js"))),
"md5.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/md5.js"))),
"identicon.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/identicon.js"))),
"datatables.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/datatables.js"))),
"datatables.css" => Ok(Content(ContentType::CSS, include_bytes!("../static/scripts/datatables.css"))),
"jquery-3.5.1.slim.js" => Ok(Content(ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.5.1.slim.js"))),
_ => err!(format!("Static file not found: {}", filename)),
}
}

View File

@ -215,12 +215,10 @@ pub fn generate_admin_claims() -> AdminJWTClaims {
//
// Bearer token authentication
//
use rocket::{
request::{FromRequest, Request, Outcome},
};
use rocket::request::{FromRequest, Outcome, Request};
use crate::db::{
models::{Device, User, UserOrgStatus, UserOrgType, UserOrganization},
models::{CollectionUser, Device, User, UserOrgStatus, UserOrgType, UserOrganization, UserStampException},
DbConn,
};
@ -298,7 +296,25 @@ impl<'a, 'r> FromRequest<'a, 'r> for Headers {
};
if user.security_stamp != claims.sstamp {
err_handler!("Invalid security stamp")
if let Some(stamp_exception) = user
.stamp_exception
.as_deref()
.and_then(|s| serde_json::from_str::<UserStampException>(s).ok())
{
let current_route = match request.route().and_then(|r| r.name) {
Some(name) => name,
_ => err_handler!("Error getting current route for stamp exception"),
};
// Check if both match; if not, this route is not allowed with the current security stamp.
if stamp_exception.route != current_route {
err_handler!("Invalid security stamp: Current route and exception route do not match")
} else if stamp_exception.security_stamp != claims.sstamp {
err_handler!("Invalid security stamp for matched stamp exception")
}
} else {
err_handler!("Invalid security stamp")
}
}
Outcome::Success(Headers { host, device, user })
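// Illustrative sketch (not part of this diff): the `stamp_exception` column is
// expected to hold a small JSON blob that deserializes into `UserStampException`.
// The field names below come from the checks above; the route value is only an
// example (e.g. the key-rotation endpoint):
//
//     {"route": "post_rotatekey", "security_stamp": "<previous stamp>"}
//
// A minimal matching definition would look roughly like:
//
//     #[derive(Serialize, Deserialize)]
//     pub struct UserStampException {
//         pub route: String,
//         pub security_stamp: String,
//     }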
@ -310,6 +326,8 @@ pub struct OrgHeaders {
pub device: Device,
pub user: User,
pub org_user_type: UserOrgType,
pub org_user: UserOrganization,
pub org_id: String,
}
// org_id is usually the second param ("/organizations/<org_id>")
@ -370,6 +388,8 @@ impl<'a, 'r> FromRequest<'a, 'r> for OrgHeaders {
err_handler!("Unknown user type in the database")
}
},
org_user,
org_id,
})
}
_ => err_handler!("Error getting the organization id"),
@ -419,6 +439,127 @@ impl Into<Headers> for AdminHeaders {
}
}
// col_id is usually the fourth param ("/organizations/<org_id>/collections/<col_id>")
// But there could be cases where it is located in a query value.
// First check the param; if this is not a valid UUID, we will try the query value.
fn get_col_id(request: &Request) -> Option<String> {
if let Some(Ok(col_id)) = request.get_param::<String>(3) {
if uuid::Uuid::parse_str(&col_id).is_ok() {
return Some(col_id);
}
}
if let Some(Ok(col_id)) = request.get_query_value::<String>("collectionId") {
if uuid::Uuid::parse_str(&col_id).is_ok() {
return Some(col_id);
}
}
None
}
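// Illustrative sketch (not part of this diff): `get_col_id` accepts the id from
// either request shape below (the paths are examples, not actual routes):
//
//     DELETE /organizations/<org_id>/collections/<col_id>   -> fourth path segment (index 3)
//     POST   /some/route?collectionId=<col_id>              -> `collectionId` query value
//
// In both cases the value is only returned if it parses as a UUID.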
/// The ManagerHeaders are used to check if you are at least a Manager
/// and have access to the specific collection provided via the <col_id> path param or the `collectionId` query value.
/// This does strict checking on the collection_id; ManagerHeadersLoose does not.
pub struct ManagerHeaders {
pub host: String,
pub device: Device,
pub user: User,
pub org_user_type: UserOrgType,
}
impl<'a, 'r> FromRequest<'a, 'r> for ManagerHeaders {
type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
match request.guard::<OrgHeaders>() {
Outcome::Forward(_) => Outcome::Forward(()),
Outcome::Failure(f) => Outcome::Failure(f),
Outcome::Success(headers) => {
if headers.org_user_type >= UserOrgType::Manager {
match get_col_id(request) {
Some(col_id) => {
let conn = match request.guard::<DbConn>() {
Outcome::Success(conn) => conn,
_ => err_handler!("Error getting DB"),
};
if !headers.org_user.access_all {
match CollectionUser::find_by_collection_and_user(&col_id, &headers.org_user.user_uuid, &conn) {
Some(_) => (),
None => err_handler!("The current user isn't a manager for this collection"),
}
}
}
_ => err_handler!("Error getting the collection id"),
}
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
})
} else {
err_handler!("You need to be a Manager, Admin or Owner to call this endpoint")
}
}
}
}
}
impl Into<Headers> for ManagerHeaders {
fn into(self) -> Headers {
Headers {
host: self.host,
device: self.device,
user: self.user,
}
}
}
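// Illustrative sketch (not part of this diff): like the other header structs,
// ManagerHeaders is meant to be used as a Rocket request guard. The route below
// is hypothetical; only the guard pattern is the point:
//
//     #[delete("/organizations/<org_id>/collections/<col_id>")]
//     fn delete_collection(org_id: String, col_id: String, headers: ManagerHeaders, conn: DbConn) -> EmptyResult {
//         // Reaching this point means the user is at least a Manager with access to `col_id`.
//         ...
//     }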
/// ManagerHeadersLoose is used when you need to be at least a Manager,
/// but no collection_id is sent with the request (either in the path or as form data).
pub struct ManagerHeadersLoose {
pub host: String,
pub device: Device,
pub user: User,
pub org_user_type: UserOrgType,
}
impl<'a, 'r> FromRequest<'a, 'r> for ManagerHeadersLoose {
type Error = &'static str;
fn from_request(request: &'a Request<'r>) -> Outcome<Self, Self::Error> {
match request.guard::<OrgHeaders>() {
Outcome::Forward(_) => Outcome::Forward(()),
Outcome::Failure(f) => Outcome::Failure(f),
Outcome::Success(headers) => {
if headers.org_user_type >= UserOrgType::Manager {
Outcome::Success(Self {
host: headers.host,
device: headers.device,
user: headers.user,
org_user_type: headers.org_user_type,
})
} else {
err_handler!("You need to be a Manager, Admin or Owner to call this endpoint")
}
}
}
}
}
impl Into<Headers> for ManagerHeadersLoose {
fn into(self) -> Headers {
Headers {
host: self.host,
device: self.device,
user: self.user,
}
}
}
pub struct OwnerHeaders {
pub host: String,
pub device: Device,

View File

@ -5,6 +5,7 @@ use once_cell::sync::Lazy;
use reqwest::Url;
use crate::{
db::DbConnType,
error::Error,
util::{get_env, get_env_bool},
};
@ -52,7 +53,32 @@ macro_rules! make_config {
impl ConfigBuilder {
fn from_env() -> Self {
dotenv::from_path(".env").ok();
match dotenv::from_path(".env") {
Ok(_) => (),
Err(e) => match e {
dotenv::Error::LineParse(msg, pos) => {
panic!("Error loading the .env file:\nNear {:?} on position {}\nPlease fix and restart!\n", msg, pos);
},
dotenv::Error::Io(ioerr) => match ioerr.kind() {
std::io::ErrorKind::NotFound => {
println!("[INFO] No .env file found.\n");
()
},
std::io::ErrorKind::PermissionDenied => {
println!("[WARNING] Permission Denied while trying to read the .env file!\n");
()
},
_ => {
println!("[WARNING] Reading the .env file failed:\n{:?}\n", ioerr);
()
}
},
_ => {
println!("[WARNING] Reading the .env file failed:\n{:?}\n", e);
()
}
}
};
let mut builder = ConfigBuilder::default();
$($(
@ -115,6 +141,7 @@ macro_rules! make_config {
config.domain_set = _domain_set;
config.signups_domains_whitelist = config.signups_domains_whitelist.trim().to_lowercase();
config.org_creation_users = config.org_creation_users.trim().to_lowercase();
config
}
@ -220,6 +247,8 @@ make_config! {
data_folder: String, false, def, "data".to_string();
/// Database URL
database_url: String, false, auto, |c| format!("{}/{}", c.data_folder, "db.sqlite3");
/// Database connection pool size
database_max_conns: u32, false, def, 10;
/// Icon cache folder
icon_cache_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "icon_cache");
/// Attachments folder
@ -276,6 +305,9 @@ make_config! {
signups_verify_resend_limit: u32, true, def, 6;
/// Email domain whitelist |> Allow signups only from this list of comma-separated domains, even when signups are otherwise disabled
signups_domains_whitelist: String, true, def, "".to_string();
/// Org creation users |> Allow org creation only by this list of comma-separated user emails.
/// Blank or 'all' means all users can create orgs; 'none' means no users can create orgs.
org_creation_users: String, true, def, "".to_string();
/// Allow invitations |> Controls whether users can be invited by organization admins, even when signups are otherwise disabled
invitations_allowed: bool, true, def, true;
/// Password iterations |> Number of server-side password hashing iterations.
@ -342,6 +374,9 @@ make_config! {
/// that do not support WAL. Please make sure you read project wiki on the topic before changing this setting.
enable_db_wal: bool, false, def, true;
/// Max database connection retries |> Number of times to retry the database connection during startup, with 1 second between each retry, set to 0 to retry indefinitely
db_connection_retries: u32, false, def, 15;
/// Bypass admin page security (Know the risks!) |> Disables the Admin Token for the admin page so you may use your own auth in front
disable_admin_token: bool, true, def, false;
@ -378,36 +413,42 @@ make_config! {
/// SMTP Email Settings
smtp: _enable_smtp {
/// Enabled
_enable_smtp: bool, true, def, true;
_enable_smtp: bool, true, def, true;
/// Host
smtp_host: String, true, option;
/// Enable SSL
smtp_ssl: bool, true, def, true;
/// Use explicit TLS |> Enabling this would force the use of an explicit TLS connection, instead of upgrading an insecure one with STARTTLS
smtp_explicit_tls: bool, true, def, false;
smtp_host: String, true, option;
/// Enable Secure SMTP |> (Explicit) - Enabling this by default would use STARTTLS (Standard ports 587 or 25)
smtp_ssl: bool, true, def, true;
/// Force TLS |> (Implicit) - Enabling this would force the use of an SSL/TLS connection, instead of upgrading an insecure one with STARTTLS (Standard port 465)
smtp_explicit_tls: bool, true, def, false;
/// Port
smtp_port: u16, true, auto, |c| if c.smtp_explicit_tls {465} else if c.smtp_ssl {587} else {25};
smtp_port: u16, true, auto, |c| if c.smtp_explicit_tls {465} else if c.smtp_ssl {587} else {25};
/// From Address
smtp_from: String, true, def, String::new();
smtp_from: String, true, def, String::new();
/// From Name
smtp_from_name: String, true, def, "Bitwarden_RS".to_string();
smtp_from_name: String, true, def, "Bitwarden_RS".to_string();
/// Username
smtp_username: String, true, option;
smtp_username: String, true, option;
/// Password
smtp_password: Pass, true, option;
/// Json form auth mechanism |> Defaults for ssl is "Plain" and "Login" and nothing for non-ssl connections. Possible values: ["Plain", "Login", "Xoauth2"]
smtp_auth_mechanism: String, true, option;
smtp_password: Pass, true, option;
/// SMTP Auth mechanism |> Defaults for SSL are "Plain" and "Login"; nothing for Non-SSL connections. Possible values: ["Plain", "Login", "Xoauth2"]. Multiple options need to be separated by a comma ','.
smtp_auth_mechanism: String, true, option;
/// SMTP connection timeout |> Number of seconds to wait before giving up on connecting to the SMTP server
smtp_timeout: u64, true, def, 15;
smtp_timeout: u64, true, def, 15;
/// Server name sent during HELO |> By default this value is based on the machine's hostname, but it might need to be changed in case it trips some anti-spam filters
helo_name: String, true, option;
helo_name: String, true, option;
/// Enable SMTP debugging (Know the risks!) |> DANGEROUS: Enabling this will output very detailed SMTP messages. This could contain sensitive information like passwords and usernames! Only enable this during troubleshooting!
smtp_debug: bool, true, def, false;
/// Accept Invalid Certs (Know the risks!) |> DANGEROUS: Allow invalid certificates. This option introduces significant vulnerabilities to man-in-the-middle attacks!
smtp_accept_invalid_certs: bool, true, def, false;
/// Accept Invalid Hostnames (Know the risks!) |> DANGEROUS: Allow invalid hostnames. This option introduces significant vulnerabilities to man-in-the-middle attacks!
smtp_accept_invalid_hostnames: bool, true, def, false;
},
/// Email 2FA Settings
email_2fa: _enable_email_2fa {
/// Enabled |> Disabling will prevent users from setting up new email 2FA and using existing email 2FA configured
_enable_email_2fa: bool, true, auto, |c| c._enable_smtp && c.smtp_host.is_some();
/// Token number length |> Length of the numbers in an email token. Minimum of 6. Maximum is 19.
/// Email token size |> Number of digits in an email token (min: 6, max: 19). Note that the Bitwarden clients are hardcoded to mention 6 digit codes regardless of this setting.
email_token_size: u32, true, def, 6;
/// Token expiration time |> Maximum time in seconds a token is valid. The time the user has to open email client and copy token.
email_expiration_time: u64, true, def, 600;
@ -417,24 +458,21 @@ make_config! {
}
fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
let db_url = cfg.database_url.to_lowercase();
if cfg!(feature = "sqlite")
&& (db_url.starts_with("mysql:") || db_url.starts_with("postgresql:") || db_url.starts_with("postgres:"))
{
err!("`DATABASE_URL` is meant for MySQL or Postgres, while this server is meant for SQLite")
}
if cfg!(feature = "mysql") && !db_url.starts_with("mysql:") {
err!("`DATABASE_URL` should start with mysql: when using the MySQL server")
}
// Validate connection URL is valid and DB feature is enabled
DbConnType::from_url(&cfg.database_url)?;
if cfg!(feature = "postgresql") && !(db_url.starts_with("postgresql:") || db_url.starts_with("postgres:")) {
err!("`DATABASE_URL` should start with postgresql: when using the PostgreSQL server")
let limit = 256;
if cfg.database_max_conns < 1 || cfg.database_max_conns > limit {
err!(format!(
"`DATABASE_MAX_CONNS` contains an invalid value. Ensure it is between 1 and {}.",
limit,
));
}
let dom = cfg.domain.to_lowercase();
if !dom.starts_with("http://") && !dom.starts_with("https://") {
err!("DOMAIN variable needs to contain the protocol (http, https). Use 'http[s]://bw.example.com' instead of 'bw.example.com'");
err!("DOMAIN variable needs to contain the protocol (http, https). Use 'http[s]://bw.example.com' instead of 'bw.example.com'");
}
let whitelist = &cfg.signups_domains_whitelist;
@ -442,6 +480,13 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
err!("`SIGNUPS_DOMAINS_WHITELIST` contains empty tokens");
}
let org_creation_users = cfg.org_creation_users.trim().to_lowercase();
if !(org_creation_users.is_empty() || org_creation_users == "all" || org_creation_users == "none") {
if org_creation_users.split(',').any(|u| !u.contains('@')) {
err!("`ORG_CREATION_USERS` contains invalid email addresses");
}
}
if let Some(ref token) = cfg.admin_token {
if token.trim().is_empty() && !cfg.disable_admin_token {
println!("[WARNING] `ADMIN_TOKEN` is enabled but has an empty value, so the admin page will be disabled.");
@ -482,6 +527,16 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
}
}
// Check if the icon blacklist regex is valid
if let Some(ref r) = cfg.icon_blacklist_regex {
use regex::Regex;
let validate_regex = Regex::new(&r);
match validate_regex {
Ok(_) => (),
Err(e) => err!(format!("`ICON_BLACKLIST_REGEX` is invalid: {:#?}", e)),
}
}
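// Illustrative sketch (not part of this diff): a value such as
// ICON_BLACKLIST_REGEX='^(192\.168\..*|internal\.example\.com)$' passes this
// validation, while an unbalanced pattern like '([' is rejected at config time
// instead of failing later during icon downloads.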
Ok(())
}
@ -592,6 +647,19 @@ impl Config {
}
}
/// Tests whether the specified user is allowed to create an organization.
pub fn is_org_creation_allowed(&self, email: &str) -> bool {
let users = self.org_creation_users();
if users == "" || users == "all" {
true
} else if users == "none" {
false
} else {
let email = email.to_lowercase();
users.split(',').any(|u| u.trim() == email)
}
}
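// Illustrative sketch (not part of this diff): with
// ORG_CREATION_USERS="admin@example.com,owner@example.com" the check behaves as:
//
//     CONFIG.is_org_creation_allowed("Admin@Example.com"); // true, comparison is lowercased
//     CONFIG.is_org_creation_allowed("user@example.com");  // false
//
// An empty value or "all" allows everyone; "none" allows no one.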
pub fn delete_user_config(&self) -> Result<(), Error> {
crate::util::delete_file(&CONFIG_FILE)?;

View File

@ -55,17 +55,21 @@ pub fn get_random(mut array: Vec<u8>) -> Vec<u8> {
}
pub fn generate_token(token_size: u32) -> Result<String, Error> {
// A u64 can represent all whole numbers up to 19 digits long.
if token_size > 19 {
err!("Generating token failed")
err!("Token size is limited to 19 digits")
}
// 8 bytes to create an u64 for up to 19 token digits
let bytes = get_random(vec![0; 8]);
let mut bytes_array = [0u8; 8];
bytes_array.copy_from_slice(&bytes);
let low: u64 = 0;
let high: u64 = 10u64.pow(token_size);
let number = u64::from_be_bytes(bytes_array) % 10u64.pow(token_size);
// Generate a random number in the range [low, high), then format it as a
// token of fixed width, left-padding with 0 as needed.
use rand::{thread_rng, Rng};
let mut rng = thread_rng();
let number: u64 = rng.gen_range(low, high);
let token = format!("{:0size$}", number, size = token_size as usize);
Ok(token)
}
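// Illustrative sketch (not part of this diff): for token_size = 6 the number is
// drawn uniformly from [0, 1_000_000) and always rendered as exactly six digits,
// e.g. 4071 becomes "004071". Sizes above 19 are rejected because a u64 cannot
// represent all 20-digit values.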

View File

@ -1,51 +1,207 @@
use std::process::Command;
use chrono::prelude::*;
use diesel::{r2d2, r2d2::ConnectionManager, Connection as DieselConnection, ConnectionError};
use diesel::r2d2::{ConnectionManager, Pool, PooledConnection};
use rocket::{
http::Status,
request::{FromRequest, Outcome},
Request, State,
};
use crate::{error::Error, CONFIG};
use crate::{
error::{Error, MapResult},
CONFIG,
};
/// An alias to the database connection used
#[cfg(feature = "sqlite")]
type Connection = diesel::sqlite::SqliteConnection;
#[cfg(feature = "mysql")]
type Connection = diesel::mysql::MysqlConnection;
#[cfg(feature = "postgresql")]
type Connection = diesel::pg::PgConnection;
/// An alias to the type for a pool of Diesel connections.
type Pool = r2d2::Pool<ConnectionManager<Connection>>;
/// Connection request guard type: a wrapper around an r2d2 pooled connection.
pub struct DbConn(pub r2d2::PooledConnection<ConnectionManager<Connection>>);
pub mod models;
#[cfg(feature = "sqlite")]
#[cfg(sqlite)]
#[path = "schemas/sqlite/schema.rs"]
pub mod schema;
#[cfg(feature = "mysql")]
pub mod __sqlite_schema;
#[cfg(mysql)]
#[path = "schemas/mysql/schema.rs"]
pub mod schema;
#[cfg(feature = "postgresql")]
pub mod __mysql_schema;
#[cfg(postgresql)]
#[path = "schemas/postgresql/schema.rs"]
pub mod schema;
pub mod __postgresql_schema;
/// Initializes a database pool.
pub fn init_pool() -> Pool {
let manager = ConnectionManager::new(CONFIG.database_url());
r2d2::Pool::builder().build(manager).expect("Failed to create pool")
// This is used to generate the main DbConn and DbPool enums, which contain one variant for each database supported
macro_rules! generate_connections {
( $( $name:ident: $ty:ty ),+ ) => {
#[allow(non_camel_case_types, dead_code)]
#[derive(Eq, PartialEq)]
pub enum DbConnType { $( $name, )+ }
#[allow(non_camel_case_types)]
pub enum DbConn { $( #[cfg($name)] $name(PooledConnection<ConnectionManager< $ty >>), )+ }
#[allow(non_camel_case_types)]
pub enum DbPool { $( #[cfg($name)] $name(Pool<ConnectionManager< $ty >>), )+ }
impl DbPool {
// For the given database URL, guess its type, run the migrations, create the pool and return it
pub fn from_config() -> Result<Self, Error> {
let url = CONFIG.database_url();
let conn_type = DbConnType::from_url(&url)?;
match conn_type { $(
DbConnType::$name => {
#[cfg($name)]
{
paste::paste!{ [< $name _migrations >]::run_migrations()?; }
let manager = ConnectionManager::new(&url);
let pool = Pool::builder()
.max_size(CONFIG.database_max_conns())
.build(manager)
.map_res("Failed to create pool")?;
return Ok(Self::$name(pool));
}
#[cfg(not($name))]
#[allow(unreachable_code)]
return unreachable!("Trying to use a DB backend when its feature is disabled");
},
)+ }
}
// Get a connection from the pool
pub fn get(&self) -> Result<DbConn, Error> {
match self { $(
#[cfg($name)]
Self::$name(p) => Ok(DbConn::$name(p.get().map_res("Error retrieving connection from pool")?)),
)+ }
}
}
};
}
pub fn get_connection() -> Result<Connection, ConnectionError> {
Connection::establish(&CONFIG.database_url())
generate_connections! {
sqlite: diesel::sqlite::SqliteConnection,
mysql: diesel::mysql::MysqlConnection,
postgresql: diesel::pg::PgConnection
}
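// Illustrative sketch (not part of this diff): for the invocation above the macro
// roughly expands to one variant per enabled backend, e.g.:
//
//     pub enum DbConnType { sqlite, mysql, postgresql }
//     pub enum DbConn { sqlite(PooledConnection<ConnectionManager<SqliteConnection>>), /* ... */ }
//     pub enum DbPool { sqlite(Pool<ConnectionManager<SqliteConnection>>), /* ... */ }
//
// plus the DbPool::from_config() / DbPool::get() methods that dispatch on the variant.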
impl DbConnType {
pub fn from_url(url: &str) -> Result<DbConnType, Error> {
// Mysql
if url.starts_with("mysql:") {
#[cfg(mysql)]
return Ok(DbConnType::mysql);
#[cfg(not(mysql))]
err!("`DATABASE_URL` is a MySQL URL, but the 'mysql' feature is not enabled")
// Postgres
} else if url.starts_with("postgresql:") || url.starts_with("postgres:") {
#[cfg(postgresql)]
return Ok(DbConnType::postgresql);
#[cfg(not(postgresql))]
err!("`DATABASE_URL` is a PostgreSQL URL, but the 'postgresql' feature is not enabled")
// SQLite
} else {
#[cfg(sqlite)]
return Ok(DbConnType::sqlite);
#[cfg(not(sqlite))]
err!("`DATABASE_URL` looks like a SQLite URL, but 'sqlite' feature is not enabled")
}
}
}
#[macro_export]
macro_rules! db_run {
// Same for all dbs
( $conn:ident: $body:block ) => {
db_run! { $conn: sqlite, mysql, postgresql $body }
};
// Different code for each db
( $conn:ident: $( $($db:ident),+ $body:block )+ ) => {
#[allow(unused)] use diesel::prelude::*;
match $conn {
$($(
#[cfg($db)]
crate::db::DbConn::$db(ref $conn) => {
paste::paste! {
#[allow(unused)] use crate::db::[<__ $db _schema>]::{self as schema, *};
#[allow(unused)] use [<__ $db _model>]::*;
#[allow(unused)] use crate::db::FromDb;
}
$body
},
)+)+
}
};
}
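// Illustrative usage sketch (not part of this diff): a model method can run the
// same Diesel code on every backend, or branch per backend when the SQL differs:
//
//     db_run! { conn: {
//         folders::table.count().first::<i64>(conn).unwrap_or(0) // same for all backends
//     }}
//
//     db_run! { conn:
//         sqlite, mysql { /* replace_into() style upsert */ }
//         postgresql { /* insert_into().on_conflict() style upsert */ }
//     }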
pub trait FromDb {
type Output;
#[allow(clippy::wrong_self_convention)]
fn from_db(self) -> Self::Output;
}
// For each struct, e.g. Cipher, we create a CipherDb inside a module named __$db_model (where $db is sqlite, mysql or postgresql),
// to implement the Diesel traits. We also provide methods to convert between them and the basic structs. Later, that module is automatically imported when using db_run!
#[macro_export]
macro_rules! db_object {
( $(
$( #[$attr:meta] )*
pub struct $name:ident {
$( $( #[$field_attr:meta] )* $vis:vis $field:ident : $typ:ty ),+
$(,)?
}
)+ ) => {
// Create the normal struct, without attributes
$( pub struct $name { $( /*$( #[$field_attr] )**/ $vis $field : $typ, )+ } )+
#[cfg(sqlite)]
pub mod __sqlite_model { $( db_object! { @db sqlite | $( #[$attr] )* | $name | $( $( #[$field_attr] )* $field : $typ ),+ } )+ }
#[cfg(mysql)]
pub mod __mysql_model { $( db_object! { @db mysql | $( #[$attr] )* | $name | $( $( #[$field_attr] )* $field : $typ ),+ } )+ }
#[cfg(postgresql)]
pub mod __postgresql_model { $( db_object! { @db postgresql | $( #[$attr] )* | $name | $( $( #[$field_attr] )* $field : $typ ),+ } )+ }
};
( @db $db:ident | $( #[$attr:meta] )* | $name:ident | $( $( #[$field_attr:meta] )* $vis:vis $field:ident : $typ:ty),+) => {
paste::paste! {
#[allow(unused)] use super::*;
#[allow(unused)] use diesel::prelude::*;
#[allow(unused)] use crate::db::[<__ $db _schema>]::*;
$( #[$attr] )*
pub struct [<$name Db>] { $(
$( #[$field_attr] )* $vis $field : $typ,
)+ }
impl [<$name Db>] {
#[allow(clippy::wrong_self_convention)]
#[inline(always)] pub fn to_db(x: &super::$name) -> Self { Self { $( $field: x.$field.clone(), )+ } }
}
impl crate::db::FromDb for [<$name Db>] {
type Output = super::$name;
#[inline(always)] fn from_db(self) -> Self::Output { super::$name { $( $field: self.$field, )+ } }
}
impl crate::db::FromDb for Vec<[<$name Db>]> {
type Output = Vec<super::$name>;
#[inline(always)] fn from_db(self) -> Self::Output { self.into_iter().map(crate::db::FromDb::from_db).collect() }
}
impl crate::db::FromDb for Option<[<$name Db>]> {
type Output = Option<super::$name>;
#[inline(always)] fn from_db(self) -> Self::Output { self.map(crate::db::FromDb::from_db) }
}
}
};
}
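// Illustrative sketch (not part of this diff): wrapping a model in db_object! keeps
// the plain struct (e.g. Attachment) free of Diesel derives and generates a
// per-backend twin (e.g. __sqlite_model::AttachmentDb) that carries them, together
// with the AttachmentDb::to_db(&attachment) and attachment_db.from_db() conversions
// used inside db_run! blocks.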
// Re-export the models; this needs to come after the macros are defined so it can access them
pub mod models;
/// Creates a back-up of the database using sqlite3
pub fn backup_database() -> Result<(), Error> {
use std::path::Path;
@ -73,18 +229,102 @@ impl<'a, 'r> FromRequest<'a, 'r> for DbConn {
fn from_request(request: &'a Request<'r>) -> Outcome<DbConn, ()> {
// https://github.com/SergioBenitez/Rocket/commit/e3c1a4ad3ab9b840482ec6de4200d30df43e357c
let pool = try_outcome!(request.guard::<State<Pool>>());
let pool = try_outcome!(request.guard::<State<DbPool>>());
match pool.get() {
Ok(conn) => Outcome::Success(DbConn(conn)),
Ok(conn) => Outcome::Success(conn),
Err(_) => Outcome::Failure((Status::ServiceUnavailable, ())),
}
}
}
// For the convenience of using an &DbConn as a &Database.
impl std::ops::Deref for DbConn {
type Target = Connection;
fn deref(&self) -> &Self::Target {
&self.0
// Embed the migrations from the migrations folder into the application
// This way, the program automatically migrates the database to the latest version
// https://docs.rs/diesel_migrations/*/diesel_migrations/macro.embed_migrations.html
#[cfg(sqlite)]
mod sqlite_migrations {
#[allow(unused_imports)]
embed_migrations!("migrations/sqlite");
pub fn run_migrations() -> Result<(), super::Error> {
// Make sure the directory exists
let url = crate::CONFIG.database_url();
let path = std::path::Path::new(&url);
if let Some(parent) = path.parent() {
if std::fs::create_dir_all(parent).is_err() {
error!("Error creating database directory");
std::process::exit(1);
}
}
use diesel::{Connection, RunQueryDsl};
// Make sure the database is up to date (create if it doesn't exist, or run the migrations)
let connection =
diesel::sqlite::SqliteConnection::establish(&crate::CONFIG.database_url())?;
// Disable Foreign Key Checks during migration
// Scoped to a connection.
diesel::sql_query("PRAGMA foreign_keys = OFF")
.execute(&connection)
.expect("Failed to disable Foreign Key Checks during migrations");
// Turn on WAL in SQLite
if crate::CONFIG.enable_db_wal() {
diesel::sql_query("PRAGMA journal_mode=wal")
.execute(&connection)
.expect("Failed to turn on WAL");
}
embedded_migrations::run_with_output(&connection, &mut std::io::stdout())?;
Ok(())
}
}
#[cfg(mysql)]
mod mysql_migrations {
#[allow(unused_imports)]
embed_migrations!("migrations/mysql");
pub fn run_migrations() -> Result<(), super::Error> {
use diesel::{Connection, RunQueryDsl};
// Make sure the database is up to date (create if it doesn't exist, or run the migrations)
let connection =
diesel::mysql::MysqlConnection::establish(&crate::CONFIG.database_url())?;
// Disable Foreign Key Checks during migration
// Scoped to a connection/session.
diesel::sql_query("SET FOREIGN_KEY_CHECKS = 0")
.execute(&connection)
.expect("Failed to disable Foreign Key Checks during migrations");
embedded_migrations::run_with_output(&connection, &mut std::io::stdout())?;
Ok(())
}
}
#[cfg(postgresql)]
mod postgresql_migrations {
#[allow(unused_imports)]
embed_migrations!("migrations/postgresql");
pub fn run_migrations() -> Result<(), super::Error> {
use diesel::{Connection, RunQueryDsl};
// Make sure the database is up to date (create if it doesn't exist, or run the migrations)
let connection =
diesel::pg::PgConnection::establish(&crate::CONFIG.database_url())?;
// Disable Foreign Key Checks during migration
// FIXME: Per https://www.postgresql.org/docs/12/sql-set-constraints.html,
// "SET CONSTRAINTS sets the behavior of constraint checking within the
// current transaction", so this setting probably won't take effect for
// any of the migrations since it's being run outside of a transaction.
// Migrations that need to disable foreign key checks should run this
// from within the migration script itself.
diesel::sql_query("SET CONSTRAINTS ALL DEFERRED")
.execute(&connection)
.expect("Failed to disable Foreign Key Checks during migrations");
embedded_migrations::run_with_output(&connection, &mut std::io::stdout())?;
Ok(())
}
}

View File

@ -3,17 +3,19 @@ use serde_json::Value;
use super::Cipher;
use crate::CONFIG;
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "attachments"]
#[changeset_options(treat_none_as_null="true")]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")]
#[primary_key(id)]
pub struct Attachment {
pub id: String,
pub cipher_uuid: String,
pub file_name: String,
pub file_size: i32,
pub akey: Option<String>,
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "attachments"]
#[changeset_options(treat_none_as_null="true")]
#[belongs_to(super::Cipher, foreign_key = "cipher_uuid")]
#[primary_key(id)]
pub struct Attachment {
pub id: String,
pub cipher_uuid: String,
pub file_name: String,
pub file_size: i32,
pub akey: Option<String>,
}
}
/// Local methods
@ -50,43 +52,57 @@ impl Attachment {
}
}
use crate::db::schema::{attachments, ciphers};
use crate::db::DbConn;
use diesel::prelude::*;
use crate::api::EmptyResult;
use crate::error::MapResult;
/// Database methods
impl Attachment {
#[cfg(feature = "postgresql")]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
diesel::insert_into(attachments::table)
.values(self)
.on_conflict(attachments::id)
.do_update()
.set(self)
.execute(&**conn)
.map_res("Error saving attachment")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
diesel::replace_into(attachments::table)
.values(self)
.execute(&**conn)
.map_res("Error saving attachment")
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(attachments::table)
.values(AttachmentDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(attachments::table)
.filter(attachments::id.eq(&self.id))
.set(AttachmentDb::to_db(self))
.execute(conn)
.map_res("Error saving attachment")
}
Err(e) => Err(e.into()),
}.map_res("Error saving attachment")
}
postgresql {
let value = AttachmentDb::to_db(self);
diesel::insert_into(attachments::table)
.values(&value)
.on_conflict(attachments::id)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error saving attachment")
}
}
}
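// Design note (not part of this diff): replace_into() on SQLite/MySQL deletes and
// re-inserts the row, so an existing record referenced by other rows trips a
// foreign-key violation; the code then falls back to a plain UPDATE. PostgreSQL
// avoids the problem with a true upsert via insert_into().on_conflict().do_update().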
pub fn delete(self, conn: &DbConn) -> EmptyResult {
crate::util::retry(
|| diesel::delete(attachments::table.filter(attachments::id.eq(&self.id))).execute(&**conn),
10,
)
.map_res("Error deleting attachment")?;
db_run! { conn: {
crate::util::retry(
|| diesel::delete(attachments::table.filter(attachments::id.eq(&self.id))).execute(conn),
10,
)
.map_res("Error deleting attachment")?;
crate::util::delete_file(&self.get_file_path())?;
Ok(())
crate::util::delete_file(&self.get_file_path())?;
Ok(())
}}
}
pub fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
@ -97,67 +113,78 @@ impl Attachment {
}
pub fn find_by_id(id: &str, conn: &DbConn) -> Option<Self> {
let id = id.to_lowercase();
attachments::table
.filter(attachments::id.eq(id))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
attachments::table
.filter(attachments::id.eq(id.to_lowercase()))
.first::<AttachmentDb>(conn)
.ok()
.from_db()
}}
}
pub fn find_by_cipher(cipher_uuid: &str, conn: &DbConn) -> Vec<Self> {
attachments::table
.filter(attachments::cipher_uuid.eq(cipher_uuid))
.load::<Self>(&**conn)
.expect("Error loading attachments")
db_run! { conn: {
attachments::table
.filter(attachments::cipher_uuid.eq(cipher_uuid))
.load::<AttachmentDb>(conn)
.expect("Error loading attachments")
.from_db()
}}
}
pub fn find_by_ciphers(cipher_uuids: Vec<String>, conn: &DbConn) -> Vec<Self> {
attachments::table
.filter(attachments::cipher_uuid.eq_any(cipher_uuids))
.load::<Self>(&**conn)
.expect("Error loading attachments")
db_run! { conn: {
attachments::table
.filter(attachments::cipher_uuid.eq_any(cipher_uuids))
.load::<AttachmentDb>(conn)
.expect("Error loading attachments")
.from_db()
}}
}
pub fn size_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
let result: Option<i64> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::user_uuid.eq(user_uuid))
.select(diesel::dsl::sum(attachments::file_size))
.first(&**conn)
.expect("Error loading user attachment total size");
result.unwrap_or(0)
db_run! { conn: {
let result: Option<i64> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::user_uuid.eq(user_uuid))
.select(diesel::dsl::sum(attachments::file_size))
.first(conn)
.expect("Error loading user attachment total size");
result.unwrap_or(0)
}}
}
pub fn count_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::user_uuid.eq(user_uuid))
.count()
.first::<i64>(&**conn)
.ok()
.unwrap_or(0)
db_run! { conn: {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::user_uuid.eq(user_uuid))
.count()
.first(conn)
.unwrap_or(0)
}}
}
pub fn size_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
let result: Option<i64> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::organization_uuid.eq(org_uuid))
.select(diesel::dsl::sum(attachments::file_size))
.first(&**conn)
.expect("Error loading user attachment total size");
result.unwrap_or(0)
db_run! { conn: {
let result: Option<i64> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::organization_uuid.eq(org_uuid))
.select(diesel::dsl::sum(attachments::file_size))
.first(conn)
.expect("Error loading user attachment total size");
result.unwrap_or(0)
}}
}
pub fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::organization_uuid.eq(org_uuid))
.count()
.first(&**conn)
.ok()
.unwrap_or(0)
db_run! { conn: {
attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::organization_uuid.eq(org_uuid))
.count()
.first(conn)
.unwrap_or(0)
}}
}
}

View File

@ -2,39 +2,48 @@ use chrono::{NaiveDateTime, Utc};
use serde_json::Value;
use super::{
Attachment, CollectionCipher, FolderCipher, Organization, User, UserOrgStatus, UserOrgType, UserOrganization,
Attachment,
CollectionCipher,
Favorite,
FolderCipher,
Organization,
User,
UserOrgStatus,
UserOrgType,
UserOrganization,
};
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "ciphers"]
#[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Organization, foreign_key = "organization_uuid")]
#[primary_key(uuid)]
pub struct Cipher {
pub uuid: String,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "ciphers"]
#[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Organization, foreign_key = "organization_uuid")]
#[primary_key(uuid)]
pub struct Cipher {
pub uuid: String,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
pub user_uuid: Option<String>,
pub organization_uuid: Option<String>,
pub user_uuid: Option<String>,
pub organization_uuid: Option<String>,
/*
Login = 1,
SecureNote = 2,
Card = 3,
Identity = 4
*/
pub atype: i32,
pub name: String,
pub notes: Option<String>,
pub fields: Option<String>,
/*
Login = 1,
SecureNote = 2,
Card = 3,
Identity = 4
*/
pub atype: i32,
pub name: String,
pub notes: Option<String>,
pub fields: Option<String>,
pub data: String,
pub data: String,
pub favorite: bool,
pub password_history: Option<String>,
pub deleted_at: Option<NaiveDateTime>,
pub password_history: Option<String>,
pub deleted_at: Option<NaiveDateTime>,
}
}
/// Local methods
@ -51,7 +60,6 @@ impl Cipher {
organization_uuid: None,
atype,
favorite: false,
name,
notes: None,
@ -64,9 +72,7 @@ impl Cipher {
}
}
use crate::db::schema::*;
use crate::db::DbConn;
use diesel::prelude::*;
use crate::api::EmptyResult;
use crate::error::MapResult;
@ -83,7 +89,7 @@ impl Cipher {
let password_history_json = self.password_history.as_ref().and_then(|s| serde_json::from_str(s).ok()).unwrap_or(Value::Null);
let (read_only, hide_passwords) =
match self.get_access_restrictions(&user_uuid, &conn) {
match self.get_access_restrictions(&user_uuid, conn) {
Some((ro, hp)) => (ro, hp),
None => {
error!("Cipher ownership assertion failure");
@ -127,14 +133,14 @@ impl Cipher {
"Type": self.atype,
"RevisionDate": format_date(&self.updated_at),
"DeletedDate": self.deleted_at.map_or(Value::Null, |d| Value::String(format_date(&d))),
"FolderId": self.get_folder_uuid(&user_uuid, &conn),
"Favorite": self.favorite,
"FolderId": self.get_folder_uuid(&user_uuid, conn),
"Favorite": self.is_favorite(&user_uuid, conn),
"OrganizationId": self.organization_uuid,
"Attachments": attachments_json,
"OrganizationUseTotp": true,
// This field is specific to the cipherDetails type.
"CollectionIds": self.get_collections(user_uuid, &conn),
"CollectionIds": self.get_collections(user_uuid, conn),
"Name": self.name,
"Notes": self.notes,
@ -185,41 +191,54 @@ impl Cipher {
user_uuids
}
#[cfg(feature = "postgresql")]
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
self.updated_at = Utc::now().naive_utc();
diesel::insert_into(ciphers::table)
.values(&*self)
.on_conflict(ciphers::uuid)
.do_update()
.set(&*self)
.execute(&**conn)
.map_res("Error saving cipher")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
self.updated_at = Utc::now().naive_utc();
diesel::replace_into(ciphers::table)
.values(&*self)
.execute(&**conn)
.map_res("Error saving cipher")
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(ciphers::table)
.values(CipherDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(ciphers::table)
.filter(ciphers::uuid.eq(&self.uuid))
.set(CipherDb::to_db(self))
.execute(conn)
.map_res("Error saving cipher")
}
Err(e) => Err(e.into()),
}.map_res("Error saving cipher")
}
postgresql {
let value = CipherDb::to_db(self);
diesel::insert_into(ciphers::table)
.values(&value)
.on_conflict(ciphers::uuid)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error saving cipher")
}
}
}
pub fn delete(&self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
FolderCipher::delete_all_by_cipher(&self.uuid, &conn)?;
CollectionCipher::delete_all_by_cipher(&self.uuid, &conn)?;
Attachment::delete_all_by_cipher(&self.uuid, &conn)?;
FolderCipher::delete_all_by_cipher(&self.uuid, conn)?;
CollectionCipher::delete_all_by_cipher(&self.uuid, conn)?;
Attachment::delete_all_by_cipher(&self.uuid, conn)?;
Favorite::delete_all_by_cipher(&self.uuid, conn)?;
diesel::delete(ciphers::table.filter(ciphers::uuid.eq(&self.uuid)))
.execute(&**conn)
.map_res("Error deleting cipher")
db_run! { conn: {
diesel::delete(ciphers::table.filter(ciphers::uuid.eq(&self.uuid)))
.execute(conn)
.map_res("Error deleting cipher")
}}
}
pub fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
@ -237,28 +256,28 @@ impl Cipher {
}
pub fn move_to_folder(&self, folder_uuid: Option<String>, user_uuid: &str, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(user_uuid, &conn);
User::update_uuid_revision(user_uuid, conn);
match (self.get_folder_uuid(&user_uuid, &conn), folder_uuid) {
match (self.get_folder_uuid(&user_uuid, conn), folder_uuid) {
// No changes
(None, None) => Ok(()),
(Some(ref old), Some(ref new)) if old == new => Ok(()),
// Add to folder
(None, Some(new)) => FolderCipher::new(&new, &self.uuid).save(&conn),
(None, Some(new)) => FolderCipher::new(&new, &self.uuid).save(conn),
// Remove from folder
(Some(old), None) => match FolderCipher::find_by_folder_and_cipher(&old, &self.uuid, &conn) {
Some(old) => old.delete(&conn),
(Some(old), None) => match FolderCipher::find_by_folder_and_cipher(&old, &self.uuid, conn) {
Some(old) => old.delete(conn),
None => err!("Couldn't move from previous folder"),
},
// Move to another folder
(Some(old), Some(new)) => {
if let Some(old) = FolderCipher::find_by_folder_and_cipher(&old, &self.uuid, &conn) {
old.delete(&conn)?;
if let Some(old) = FolderCipher::find_by_folder_and_cipher(&old, &self.uuid, conn) {
old.delete(conn)?;
}
FolderCipher::new(&new, &self.uuid).save(&conn)
FolderCipher::new(&new, &self.uuid).save(conn)
}
}
}
@ -271,7 +290,7 @@ impl Cipher {
/// Returns whether this cipher is owned by an org in which the user has full access.
pub fn is_in_full_access_org(&self, user_uuid: &str, conn: &DbConn) -> bool {
if let Some(ref org_uuid) = self.organization_uuid {
if let Some(user_org) = UserOrganization::find_by_user_and_org(&user_uuid, &org_uuid, &conn) {
if let Some(user_org) = UserOrganization::find_by_user_and_org(&user_uuid, &org_uuid, conn) {
return user_org.has_full_access();
}
}
@ -292,38 +311,40 @@ impl Cipher {
return Some((false, false));
}
// Check whether this cipher is in any collections accessible to the
// user. If so, retrieve the access flags for each collection.
let query = ciphers::table
.filter(ciphers::uuid.eq(&self.uuid))
.inner_join(ciphers_collections::table.on(
ciphers::uuid.eq(ciphers_collections::cipher_uuid)))
.inner_join(users_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
.and(users_collections::user_uuid.eq(user_uuid))))
.select((users_collections::read_only, users_collections::hide_passwords));
db_run! {conn: {
// Check whether this cipher is in any collections accessible to the
// user. If so, retrieve the access flags for each collection.
let query = ciphers::table
.filter(ciphers::uuid.eq(&self.uuid))
.inner_join(ciphers_collections::table.on(
ciphers::uuid.eq(ciphers_collections::cipher_uuid)))
.inner_join(users_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
.and(users_collections::user_uuid.eq(user_uuid))))
.select((users_collections::read_only, users_collections::hide_passwords));
// There's an edge case where a cipher can be in multiple collections
// with inconsistent access flags. For example, a cipher could be in
// one collection where the user has read-only access, but also in
// another collection where the user has read/write access. To handle
// this, we do a boolean OR of all values in each of the `read_only`
// and `hide_passwords` columns. This could ideally be done as part
// of the query, but Diesel doesn't support a max() or bool_or()
// function on booleans and this behavior isn't portable anyway.
if let Some(vec) = query.load::<(bool, bool)>(&**conn).ok() {
let mut read_only = false;
let mut hide_passwords = false;
for (ro, hp) in vec.iter() {
read_only |= ro;
hide_passwords |= hp;
// There's an edge case where a cipher can be in multiple collections
// with inconsistent access flags. For example, a cipher could be in
// one collection where the user has read-only access, but also in
// another collection where the user has read/write access. To handle
// this, we do a boolean OR of all values in each of the `read_only`
// and `hide_passwords` columns. This could ideally be done as part
// of the query, but Diesel doesn't support a max() or bool_or()
// function on booleans and this behavior isn't portable anyway.
if let Ok(vec) = query.load::<(bool, bool)>(conn) {
let mut read_only = false;
let mut hide_passwords = false;
for (ro, hp) in vec.iter() {
read_only |= ro;
hide_passwords |= hp;
}
Some((read_only, hide_passwords))
} else {
// This cipher isn't in any collections accessible to the user.
None
}
Some((read_only, hide_passwords))
} else {
// This cipher isn't in any collections accessible to the user.
None
}
}}
}
pub fn is_write_accessible_to_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
@ -337,113 +358,164 @@ impl Cipher {
self.get_access_restrictions(&user_uuid, &conn).is_some()
}
// Returns whether this cipher is a favorite of the specified user.
pub fn is_favorite(&self, user_uuid: &str, conn: &DbConn) -> bool {
Favorite::is_favorite(&self.uuid, user_uuid, conn)
}
// Sets whether this cipher is a favorite of the specified user.
pub fn set_favorite(&self, favorite: Option<bool>, user_uuid: &str, conn: &DbConn) -> EmptyResult {
match favorite {
None => Ok(()), // No change requested.
Some(status) => Favorite::set_favorite(status, &self.uuid, user_uuid, conn),
}
}
pub fn get_folder_uuid(&self, user_uuid: &str, conn: &DbConn) -> Option<String> {
folders_ciphers::table
.inner_join(folders::table)
.filter(folders::user_uuid.eq(&user_uuid))
.filter(folders_ciphers::cipher_uuid.eq(&self.uuid))
.select(folders_ciphers::folder_uuid)
.first::<String>(&**conn)
.ok()
db_run! {conn: {
folders_ciphers::table
.inner_join(folders::table)
.filter(folders::user_uuid.eq(&user_uuid))
.filter(folders_ciphers::cipher_uuid.eq(&self.uuid))
.select(folders_ciphers::folder_uuid)
.first::<String>(conn)
.ok()
}}
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
ciphers::table
.filter(ciphers::uuid.eq(uuid))
.first::<Self>(&**conn)
.ok()
db_run! {conn: {
ciphers::table
.filter(ciphers::uuid.eq(uuid))
.first::<CipherDb>(conn)
.ok()
.from_db()
}}
}
// Find all ciphers accessible to user
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
ciphers::table
.left_join(users_organizations::table.on(
ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable()).and(
users_organizations::user_uuid.eq(user_uuid).and(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
)
)
))
.left_join(ciphers_collections::table.on(
ciphers::uuid.eq(ciphers_collections::cipher_uuid)
))
.left_join(users_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
))
.filter(ciphers::user_uuid.eq(user_uuid).or( // Cipher owner
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(UserOrgType::Admin as i32).or( // Org admin or owner
users_collections::user_uuid.eq(user_uuid).and( // Access to Collection
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
)
)
)
))
.select(ciphers::all_columns)
.distinct()
.load::<Self>(&**conn).expect("Error loading ciphers")
// Find all ciphers accessible or visible to the specified user.
//
// "Accessible" means the user has read access to the cipher, either via
// direct ownership or via collection access.
//
// "Visible" usually means the same as accessible, except when an org
// owner/admin sets their account to have access to only selected
// collections in the org (presumably because they aren't interested in
// the other collections in the org). In this case, if `visible_only` is
// true, then the non-interesting ciphers will not be returned. As a
// result, those ciphers will not appear in "My Vault" for the org
// owner/admin, but they can still be accessed via the org vault view.
pub fn find_by_user(user_uuid: &str, visible_only: bool, conn: &DbConn) -> Vec<Self> {
db_run! {conn: {
let mut query = ciphers::table
.left_join(ciphers_collections::table.on(
ciphers::uuid.eq(ciphers_collections::cipher_uuid)
))
.left_join(users_organizations::table.on(
ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable())
.and(users_organizations::user_uuid.eq(user_uuid))
.and(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
))
.left_join(users_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
// Ensure that users_collections::user_uuid is NULL for unconfirmed users.
.and(users_organizations::user_uuid.eq(users_collections::user_uuid))
))
.filter(ciphers::user_uuid.eq(user_uuid)) // Cipher owner
.or_filter(users_organizations::access_all.eq(true)) // access_all in org
.or_filter(users_collections::user_uuid.eq(user_uuid)) // Access to collection
.into_boxed();
if !visible_only {
query = query.or_filter(
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin/owner
);
}
query
.select(ciphers::all_columns)
.distinct()
.load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
}}
}
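// Illustrative sketch (not part of this diff): the personal vault view would use
// the visible-only variant, while code that must see every accessible cipher
// (e.g. org admin checks) passes visible_only = false:
//
//     let vault = Cipher::find_by_user_visible(&user.uuid, &conn);
//     let accessible = Cipher::find_by_user(&user.uuid, false, &conn);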
// Find all ciphers directly owned by user
// Find all ciphers visible to the specified user.
pub fn find_by_user_visible(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
Self::find_by_user(user_uuid, true, conn)
}
// Find all ciphers directly owned by the specified user.
pub fn find_owned_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
ciphers::table
.filter(ciphers::user_uuid.eq(user_uuid))
.load::<Self>(&**conn).expect("Error loading ciphers")
db_run! {conn: {
ciphers::table
.filter(ciphers::user_uuid.eq(user_uuid))
.load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
}}
}
pub fn count_owned_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
ciphers::table
.filter(ciphers::user_uuid.eq(user_uuid))
.count()
.first::<i64>(&**conn)
.ok()
.unwrap_or(0)
db_run! {conn: {
ciphers::table
.filter(ciphers::user_uuid.eq(user_uuid))
.count()
.first::<i64>(conn)
.ok()
.unwrap_or(0)
}}
}
pub fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
ciphers::table
.filter(ciphers::organization_uuid.eq(org_uuid))
.load::<Self>(&**conn).expect("Error loading ciphers")
db_run! {conn: {
ciphers::table
.filter(ciphers::organization_uuid.eq(org_uuid))
.load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
}}
}
pub fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
ciphers::table
.filter(ciphers::organization_uuid.eq(org_uuid))
.count()
.first::<i64>(&**conn)
.ok()
.unwrap_or(0)
db_run! {conn: {
ciphers::table
.filter(ciphers::organization_uuid.eq(org_uuid))
.count()
.first::<i64>(conn)
.ok()
.unwrap_or(0)
}}
}
pub fn find_by_folder(folder_uuid: &str, conn: &DbConn) -> Vec<Self> {
folders_ciphers::table.inner_join(ciphers::table)
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
.select(ciphers::all_columns)
.load::<Self>(&**conn).expect("Error loading ciphers")
db_run! {conn: {
folders_ciphers::table.inner_join(ciphers::table)
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
.select(ciphers::all_columns)
.load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
}}
}
pub fn get_collections(&self, user_id: &str, conn: &DbConn) -> Vec<String> {
ciphers_collections::table
.inner_join(collections::table.on(
collections::uuid.eq(ciphers_collections::collection_uuid)
))
.inner_join(users_organizations::table.on(
users_organizations::org_uuid.eq(collections::org_uuid).and(
users_organizations::user_uuid.eq(user_id)
)
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(ciphers_collections::collection_uuid).and(
users_collections::user_uuid.eq(user_id)
)
))
.filter(ciphers_collections::cipher_uuid.eq(&self.uuid))
.filter(users_collections::user_uuid.eq(user_id).or( // User has access to collection
users_organizations::access_all.eq(true).or( // User has access all
users_organizations::atype.le(UserOrgType::Admin as i32) // User is admin or owner
)
))
.select(ciphers_collections::collection_uuid)
.load::<String>(&**conn).unwrap_or_default()
db_run! {conn: {
ciphers_collections::table
.inner_join(collections::table.on(
collections::uuid.eq(ciphers_collections::collection_uuid)
))
.inner_join(users_organizations::table.on(
users_organizations::org_uuid.eq(collections::org_uuid).and(
users_organizations::user_uuid.eq(user_id)
)
))
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(ciphers_collections::collection_uuid).and(
users_collections::user_uuid.eq(user_id)
)
))
.filter(ciphers_collections::cipher_uuid.eq(&self.uuid))
.filter(users_collections::user_uuid.eq(user_id).or( // User has access to collection
users_organizations::access_all.eq(true).or( // User has access all
users_organizations::atype.le(UserOrgType::Admin as i32) // User is admin or owner
)
))
.select(ciphers_collections::collection_uuid)
.load::<String>(conn).unwrap_or_default()
}}
}
}

View File

@ -1,15 +1,39 @@
use serde_json::Value;
use super::{Organization, UserOrgStatus, UserOrgType, UserOrganization};
use super::{Organization, UserOrgStatus, UserOrgType, UserOrganization, User, Cipher};
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "collections"]
#[belongs_to(Organization, foreign_key = "org_uuid")]
#[primary_key(uuid)]
pub struct Collection {
pub uuid: String,
pub org_uuid: String,
pub name: String,
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "collections"]
#[belongs_to(Organization, foreign_key = "org_uuid")]
#[primary_key(uuid)]
pub struct Collection {
pub uuid: String,
pub org_uuid: String,
pub name: String,
}
#[derive(Debug, Identifiable, Queryable, Insertable, Associations)]
#[table_name = "users_collections"]
#[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Collection, foreign_key = "collection_uuid")]
#[primary_key(user_uuid, collection_uuid)]
pub struct CollectionUser {
pub user_uuid: String,
pub collection_uuid: String,
pub read_only: bool,
pub hide_passwords: bool,
}
#[derive(Debug, Identifiable, Queryable, Insertable, Associations)]
#[table_name = "ciphers_collections"]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")]
#[belongs_to(Collection, foreign_key = "collection_uuid")]
#[primary_key(cipher_uuid, collection_uuid)]
pub struct CollectionCipher {
pub cipher_uuid: String,
pub collection_uuid: String,
}
}
/// Local methods
@ -33,36 +57,45 @@ impl Collection {
}
}
use crate::db::schema::*;
use crate::db::DbConn;
use diesel::prelude::*;
use crate::api::EmptyResult;
use crate::error::MapResult;
/// Database methods
impl Collection {
#[cfg(feature = "postgresql")]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
diesel::insert_into(collections::table)
.values(self)
.on_conflict(collections::uuid)
.do_update()
.set(self)
.execute(&**conn)
.map_res("Error saving collection")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
self.update_users_revision(conn);
diesel::replace_into(collections::table)
.values(self)
.execute(&**conn)
.map_res("Error saving collection")
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(collections::table)
.values(CollectionDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(collections::table)
.filter(collections::uuid.eq(&self.uuid))
.set(CollectionDb::to_db(self))
.execute(conn)
.map_res("Error saving collection")
}
Err(e) => Err(e.into()),
}.map_res("Error saving collection")
}
postgresql {
let value = CollectionDb::to_db(self);
diesel::insert_into(collections::table)
.values(&value)
.on_conflict(collections::uuid)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error saving collection")
}
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
@ -70,9 +103,11 @@ impl Collection {
CollectionCipher::delete_all_by_collection(&self.uuid, &conn)?;
CollectionUser::delete_all_by_collection(&self.uuid, &conn)?;
diesel::delete(collections::table.filter(collections::uuid.eq(self.uuid)))
.execute(&**conn)
.map_res("Error deleting collection")
db_run! { conn: {
diesel::delete(collections::table.filter(collections::uuid.eq(self.uuid)))
.execute(conn)
.map_res("Error deleting collection")
}}
}
pub fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
@ -91,33 +126,38 @@ impl Collection {
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
collections::table
.filter(collections::uuid.eq(uuid))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
collections::table
.filter(collections::uuid.eq(uuid))
.first::<CollectionDb>(conn)
.ok()
.from_db()
}}
}
pub fn find_by_user_uuid(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid)
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid)
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid)
)
))
.filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid)
)
))
.filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
)
.filter(
users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true) // access_all in Organization
)
).select(collections::all_columns)
.load::<Self>(&**conn).expect("Error loading collections")
.filter(
users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true) // access_all in Organization
)
).select(collections::all_columns)
.load::<CollectionDb>(conn).expect("Error loading collections").from_db()
}}
}
pub fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &DbConn) -> Vec<Self> {
@ -128,155 +168,178 @@ impl Collection {
}
pub fn find_by_organization(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
collections::table
.filter(collections::org_uuid.eq(org_uuid))
.load::<Self>(&**conn)
.expect("Error loading collections")
db_run! { conn: {
collections::table
.filter(collections::org_uuid.eq(org_uuid))
.load::<CollectionDb>(conn)
.expect("Error loading collections")
.from_db()
}}
}
pub fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &DbConn) -> Option<Self> {
collections::table
.filter(collections::uuid.eq(uuid))
.filter(collections::org_uuid.eq(org_uuid))
.select(collections::all_columns)
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
collections::table
.filter(collections::uuid.eq(uuid))
.filter(collections::org_uuid.eq(org_uuid))
.select(collections::all_columns)
.first::<CollectionDb>(conn)
.ok()
.from_db()
}}
}
pub fn find_by_uuid_and_user(uuid: &str, user_uuid: &str, conn: &DbConn) -> Option<Self> {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid)
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid)
)
))
.filter(collections::uuid.eq(uuid))
.filter(
users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid)
)
)
).select(collections::all_columns)
.first::<Self>(&**conn).ok()
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid)
)
))
.filter(collections::uuid.eq(uuid))
.filter(
users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
)
)
).select(collections::all_columns)
.first::<CollectionDb>(conn).ok()
.from_db()
}}
}
pub fn is_writable_by_user(&self, user_uuid: &str, conn: &DbConn) -> bool {
match UserOrganization::find_by_user_and_org(&user_uuid, &self.org_uuid, &conn) {
None => false, // Not in Org
Some(user_org) => {
if user_org.access_all {
true
} else {
users_collections::table
.inner_join(collections::table)
.filter(users_collections::collection_uuid.eq(&self.uuid))
.filter(users_collections::user_uuid.eq(&user_uuid))
.filter(users_collections::read_only.eq(false))
.select(collections::all_columns)
.first::<Self>(&**conn)
.ok()
.is_some() // Read only or no access to collection
if user_org.has_full_access() {
return true;
}
db_run! { conn: {
users_collections::table
.filter(users_collections::collection_uuid.eq(&self.uuid))
.filter(users_collections::user_uuid.eq(user_uuid))
.filter(users_collections::read_only.eq(false))
.count()
.first::<i64>(conn)
.ok()
.unwrap_or(0) != 0
}}
}
}
}
}
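is_writable_by_user now short-circuits on has_full_access() and otherwise only counts matching rows, so no Collection is materialized for the check. A hypothetical call site (handler, variables and error messages are illustrative; only the model methods come from this diff):
// Reject a write to a collection the requesting user may only read.
let collection = match Collection::find_by_uuid_and_org(&coll_uuid, &org_uuid, &conn) {
    Some(collection) => collection,
    None => err!("Collection not found"),
};
if !collection.is_writable_by_user(&headers.user.uuid, &conn) {
    err!("No rights to modify this collection")
}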
use super::User;
#[derive(Debug, Identifiable, Queryable, Insertable, Associations)]
#[table_name = "users_collections"]
#[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Collection, foreign_key = "collection_uuid")]
#[primary_key(user_uuid, collection_uuid)]
pub struct CollectionUser {
pub user_uuid: String,
pub collection_uuid: String,
pub read_only: bool,
pub hide_passwords: bool,
}
/// Database methods
impl CollectionUser {
pub fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &DbConn) -> Vec<Self> {
users_collections::table
.filter(users_collections::user_uuid.eq(user_uuid))
.inner_join(collections::table.on(collections::uuid.eq(users_collections::collection_uuid)))
.filter(collections::org_uuid.eq(org_uuid))
.select(users_collections::all_columns)
.load::<Self>(&**conn)
.expect("Error loading users_collections")
db_run! { conn: {
users_collections::table
.filter(users_collections::user_uuid.eq(user_uuid))
.inner_join(collections::table.on(collections::uuid.eq(users_collections::collection_uuid)))
.filter(collections::org_uuid.eq(org_uuid))
.select(users_collections::all_columns)
.load::<CollectionUserDb>(conn)
.expect("Error loading users_collections")
.from_db()
}}
}
#[cfg(feature = "postgresql")]
pub fn save(user_uuid: &str, collection_uuid: &str, read_only: bool, hide_passwords: bool, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&user_uuid, conn);
diesel::insert_into(users_collections::table)
.values((
users_collections::user_uuid.eq(user_uuid),
users_collections::collection_uuid.eq(collection_uuid),
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
))
.on_conflict((users_collections::user_uuid, users_collections::collection_uuid))
.do_update()
.set((
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
))
.execute(&**conn)
.map_res("Error adding user to collection")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(user_uuid: &str, collection_uuid: &str, read_only: bool, hide_passwords: bool, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&user_uuid, conn);
diesel::replace_into(users_collections::table)
.values((
users_collections::user_uuid.eq(user_uuid),
users_collections::collection_uuid.eq(collection_uuid),
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
))
.execute(&**conn)
.map_res("Error adding user to collection")
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(users_collections::table)
.values((
users_collections::user_uuid.eq(user_uuid),
users_collections::collection_uuid.eq(collection_uuid),
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(users_collections::table)
.filter(users_collections::user_uuid.eq(user_uuid))
.filter(users_collections::collection_uuid.eq(collection_uuid))
.set((
users_collections::user_uuid.eq(user_uuid),
users_collections::collection_uuid.eq(collection_uuid),
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
))
.execute(conn)
.map_res("Error adding user to collection")
}
Err(e) => Err(e.into()),
}.map_res("Error adding user to collection")
}
postgresql {
diesel::insert_into(users_collections::table)
.values((
users_collections::user_uuid.eq(user_uuid),
users_collections::collection_uuid.eq(collection_uuid),
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
))
.on_conflict((users_collections::user_uuid, users_collections::collection_uuid))
.do_update()
.set((
users_collections::read_only.eq(read_only),
users_collections::hide_passwords.eq(hide_passwords),
))
.execute(conn)
.map_res("Error adding user to collection")
}
}
}
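For reference, a hedged usage sketch of the upsert above; the surrounding handler is made up, only the signature (user, collection, read_only, hide_passwords, connection) comes from this diff:
// Grant a member read-only access to a collection while keeping passwords visible.
CollectionUser::save(&member.user_uuid, &collection.uuid, /*read_only*/ true, /*hide_passwords*/ false, &conn)?;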
pub fn delete(self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn);
diesel::delete(
users_collections::table
.filter(users_collections::user_uuid.eq(&self.user_uuid))
.filter(users_collections::collection_uuid.eq(&self.collection_uuid)),
)
.execute(&**conn)
.map_res("Error removing user from collection")
db_run! { conn: {
diesel::delete(
users_collections::table
.filter(users_collections::user_uuid.eq(&self.user_uuid))
.filter(users_collections::collection_uuid.eq(&self.collection_uuid)),
)
.execute(conn)
.map_res("Error removing user from collection")
}}
}
pub fn find_by_collection(collection_uuid: &str, conn: &DbConn) -> Vec<Self> {
users_collections::table
.filter(users_collections::collection_uuid.eq(collection_uuid))
.select(users_collections::all_columns)
.load::<Self>(&**conn)
.expect("Error loading users_collections")
db_run! { conn: {
users_collections::table
.filter(users_collections::collection_uuid.eq(collection_uuid))
.select(users_collections::all_columns)
.load::<CollectionUserDb>(conn)
.expect("Error loading users_collections")
.from_db()
}}
}
pub fn find_by_collection_and_user(collection_uuid: &str, user_uuid: &str, conn: &DbConn) -> Option<Self> {
users_collections::table
.filter(users_collections::collection_uuid.eq(collection_uuid))
.filter(users_collections::user_uuid.eq(user_uuid))
.select(users_collections::all_columns)
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
users_collections::table
.filter(users_collections::collection_uuid.eq(collection_uuid))
.filter(users_collections::user_uuid.eq(user_uuid))
.select(users_collections::all_columns)
.first::<CollectionUserDb>(conn)
.ok()
.from_db()
}}
}
pub fn delete_all_by_collection(collection_uuid: &str, conn: &DbConn) -> EmptyResult {
@@ -286,81 +349,91 @@ impl CollectionUser {
User::update_uuid_revision(&collection.user_uuid, conn);
});
diesel::delete(users_collections::table.filter(users_collections::collection_uuid.eq(collection_uuid)))
.execute(&**conn)
.map_res("Error deleting users from collection")
db_run! { conn: {
diesel::delete(users_collections::table.filter(users_collections::collection_uuid.eq(collection_uuid)))
.execute(conn)
.map_res("Error deleting users from collection")
}}
}
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&user_uuid, conn);
pub fn delete_all_by_user_and_org(user_uuid: &str, org_uuid: &str, conn: &DbConn) -> EmptyResult {
let collectionusers = Self::find_by_organization_and_user_uuid(org_uuid, user_uuid, conn);
diesel::delete(users_collections::table.filter(users_collections::user_uuid.eq(user_uuid)))
.execute(&**conn)
.map_res("Error removing user from collections")
db_run! { conn: {
for user in collectionusers {
diesel::delete(users_collections::table.filter(
users_collections::user_uuid.eq(user_uuid)
.and(users_collections::collection_uuid.eq(user.collection_uuid))
))
.execute(conn)
.map_res("Error removing user from collections")?;
}
Ok(())
}}
}
}
use super::Cipher;
#[derive(Debug, Identifiable, Queryable, Insertable, Associations)]
#[table_name = "ciphers_collections"]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")]
#[belongs_to(Collection, foreign_key = "collection_uuid")]
#[primary_key(cipher_uuid, collection_uuid)]
pub struct CollectionCipher {
pub cipher_uuid: String,
pub collection_uuid: String,
}
/// Database methods
impl CollectionCipher {
#[cfg(feature = "postgresql")]
pub fn save(cipher_uuid: &str, collection_uuid: &str, conn: &DbConn) -> EmptyResult {
Self::update_users_revision(&collection_uuid, conn);
diesel::insert_into(ciphers_collections::table)
.values((
ciphers_collections::cipher_uuid.eq(cipher_uuid),
ciphers_collections::collection_uuid.eq(collection_uuid),
))
.on_conflict((ciphers_collections::cipher_uuid, ciphers_collections::collection_uuid))
.do_nothing()
.execute(&**conn)
.map_res("Error adding cipher to collection")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(cipher_uuid: &str, collection_uuid: &str, conn: &DbConn) -> EmptyResult {
Self::update_users_revision(&collection_uuid, conn);
diesel::replace_into(ciphers_collections::table)
.values((
ciphers_collections::cipher_uuid.eq(cipher_uuid),
ciphers_collections::collection_uuid.eq(collection_uuid),
))
.execute(&**conn)
.map_res("Error adding cipher to collection")
db_run! { conn:
sqlite, mysql {
// Not checking for ForeignKey Constraints here.
// No other table references ciphers_collections, so the delete performed by replace_into()
// cannot trigger a Foreign Key Violation; its own constraints only point to other tables.
diesel::replace_into(ciphers_collections::table)
.values((
ciphers_collections::cipher_uuid.eq(cipher_uuid),
ciphers_collections::collection_uuid.eq(collection_uuid),
))
.execute(conn)
.map_res("Error adding cipher to collection")
}
postgresql {
diesel::insert_into(ciphers_collections::table)
.values((
ciphers_collections::cipher_uuid.eq(cipher_uuid),
ciphers_collections::collection_uuid.eq(collection_uuid),
))
.on_conflict((ciphers_collections::cipher_uuid, ciphers_collections::collection_uuid))
.do_nothing()
.execute(conn)
.map_res("Error adding cipher to collection")
}
}
}
pub fn delete(cipher_uuid: &str, collection_uuid: &str, conn: &DbConn) -> EmptyResult {
Self::update_users_revision(&collection_uuid, conn);
diesel::delete(
ciphers_collections::table
.filter(ciphers_collections::cipher_uuid.eq(cipher_uuid))
.filter(ciphers_collections::collection_uuid.eq(collection_uuid)),
)
.execute(&**conn)
.map_res("Error deleting cipher from collection")
db_run! { conn: {
diesel::delete(
ciphers_collections::table
.filter(ciphers_collections::cipher_uuid.eq(cipher_uuid))
.filter(ciphers_collections::collection_uuid.eq(collection_uuid)),
)
.execute(conn)
.map_res("Error deleting cipher from collection")
}}
}
pub fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
diesel::delete(ciphers_collections::table.filter(ciphers_collections::cipher_uuid.eq(cipher_uuid)))
.execute(&**conn)
.map_res("Error removing cipher from collections")
db_run! { conn: {
diesel::delete(ciphers_collections::table.filter(ciphers_collections::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
.map_res("Error removing cipher from collections")
}}
}
pub fn delete_all_by_collection(collection_uuid: &str, conn: &DbConn) -> EmptyResult {
diesel::delete(ciphers_collections::table.filter(ciphers_collections::collection_uuid.eq(collection_uuid)))
.execute(&**conn)
.map_res("Error removing ciphers from collection")
db_run! { conn: {
diesel::delete(ciphers_collections::table.filter(ciphers_collections::collection_uuid.eq(collection_uuid)))
.execute(conn)
.map_res("Error removing ciphers from collection")
}}
}
pub fn update_users_revision(collection_uuid: &str, conn: &DbConn) {

View File

@@ -3,26 +3,28 @@ use chrono::{NaiveDateTime, Utc};
use super::User;
use crate::CONFIG;
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "devices"]
#[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "user_uuid")]
#[primary_key(uuid)]
pub struct Device {
pub uuid: String,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "devices"]
#[changeset_options(treat_none_as_null="true")]
#[belongs_to(User, foreign_key = "user_uuid")]
#[primary_key(uuid)]
pub struct Device {
pub uuid: String,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
pub user_uuid: String,
pub user_uuid: String,
pub name: String,
/// https://github.com/bitwarden/core/tree/master/src/Core/Enums
pub atype: i32,
pub push_token: Option<String>,
pub name: String,
// https://github.com/bitwarden/core/tree/master/src/Core/Enums
pub atype: i32,
pub push_token: Option<String>,
pub refresh_token: String,
pub refresh_token: String,
pub twofactor_remember: Option<String>,
pub twofactor_remember: Option<String>,
}
}
/// Local methods
@@ -105,41 +107,39 @@ impl Device {
}
}
use crate::db::schema::devices;
use crate::db::DbConn;
use diesel::prelude::*;
use crate::api::EmptyResult;
use crate::error::MapResult;
/// Database methods
impl Device {
#[cfg(feature = "postgresql")]
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
self.updated_at = Utc::now().naive_utc();
crate::util::retry(
|| diesel::insert_into(devices::table).values(&*self).on_conflict(devices::uuid).do_update().set(&*self).execute(&**conn),
10,
)
.map_res("Error saving device")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
self.updated_at = Utc::now().naive_utc();
crate::util::retry(
|| diesel::replace_into(devices::table).values(&*self).execute(&**conn),
10,
)
.map_res("Error saving device")
db_run! { conn:
sqlite, mysql {
crate::util::retry(
|| diesel::replace_into(devices::table).values(DeviceDb::to_db(self)).execute(conn),
10,
).map_res("Error saving device")
}
postgresql {
let value = DeviceDb::to_db(self);
crate::util::retry(
|| diesel::insert_into(devices::table).values(&value).on_conflict(devices::uuid).do_update().set(&value).execute(conn),
10,
).map_res("Error saving device")
}
}
}
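A hedged usage sketch of the retried upsert above; the scenario (storing a refreshed push token) is an assumption, and the retry count of 10 is simply what the code above passes to crate::util::retry:
// Persist a new push token for an existing device; save() retries the write
// up to 10 times before returning an error.
device.push_token = Some(new_push_token);
device.save(&conn)?;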
pub fn delete(self, conn: &DbConn) -> EmptyResult {
diesel::delete(devices::table.filter(devices::uuid.eq(self.uuid)))
.execute(&**conn)
.map_res("Error removing device")
db_run! { conn: {
diesel::delete(devices::table.filter(devices::uuid.eq(self.uuid)))
.execute(conn)
.map_res("Error removing device")
}}
}
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
@@ -150,23 +150,43 @@ impl Device {
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
devices::table
.filter(devices::uuid.eq(uuid))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
devices::table
.filter(devices::uuid.eq(uuid))
.first::<DeviceDb>(conn)
.ok()
.from_db()
}}
}
pub fn find_by_refresh_token(refresh_token: &str, conn: &DbConn) -> Option<Self> {
devices::table
.filter(devices::refresh_token.eq(refresh_token))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
devices::table
.filter(devices::refresh_token.eq(refresh_token))
.first::<DeviceDb>(conn)
.ok()
.from_db()
}}
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
.load::<Self>(&**conn)
.expect("Error loading devices")
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
.load::<DeviceDb>(conn)
.expect("Error loading devices")
.from_db()
}}
}
pub fn find_latest_active_by_user(user_uuid: &str, conn: &DbConn) -> Option<Self> {
db_run! { conn: {
devices::table
.filter(devices::user_uuid.eq(user_uuid))
.order(devices::updated_at.desc())
.first::<DeviceDb>(conn)
.ok()
.from_db()
}}
}
}
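find_latest_active_by_user is new in this diff. A hypothetical use, not taken from this changeset: checking whether the account has any known device at all.
// `None` here means the user has never connected a device before.
let has_known_device = Device::find_latest_active_by_user(&user.uuid, &conn).is_some();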

83 src/db/models/favorite.rs Normal file
View File

@@ -0,0 +1,83 @@
use super::{Cipher, User};
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations)]
#[table_name = "favorites"]
#[belongs_to(User, foreign_key = "user_uuid")]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")]
#[primary_key(user_uuid, cipher_uuid)]
pub struct Favorite {
pub user_uuid: String,
pub cipher_uuid: String,
}
}
use crate::db::DbConn;
use crate::api::EmptyResult;
use crate::error::MapResult;
impl Favorite {
// Returns whether the specified cipher is a favorite of the specified user.
pub fn is_favorite(cipher_uuid: &str, user_uuid: &str, conn: &DbConn) -> bool {
db_run!{ conn: {
let query = favorites::table
.filter(favorites::cipher_uuid.eq(cipher_uuid))
.filter(favorites::user_uuid.eq(user_uuid))
.count();
query.first::<i64>(conn).ok().unwrap_or(0) != 0
}}
}
// Sets whether the specified cipher is a favorite of the specified user.
pub fn set_favorite(favorite: bool, cipher_uuid: &str, user_uuid: &str, conn: &DbConn) -> EmptyResult {
let (old, new) = (Self::is_favorite(cipher_uuid, user_uuid, &conn), favorite);
match (old, new) {
(false, true) => {
User::update_uuid_revision(user_uuid, &conn);
db_run!{ conn: {
diesel::insert_into(favorites::table)
.values((
favorites::user_uuid.eq(user_uuid),
favorites::cipher_uuid.eq(cipher_uuid),
))
.execute(conn)
.map_res("Error adding favorite")
}}
}
(true, false) => {
User::update_uuid_revision(user_uuid, &conn);
db_run!{ conn: {
diesel::delete(
favorites::table
.filter(favorites::user_uuid.eq(user_uuid))
.filter(favorites::cipher_uuid.eq(cipher_uuid))
)
.execute(conn)
.map_res("Error removing favorite")
}}
}
// Otherwise, the favorite status is already what it should be.
_ => Ok(())
}
}
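A hedged sketch of how the two methods above combine in a cipher handler; the request data shape is illustrative:
// Toggle the per-user favorite flag, then read it back when building the response.
Favorite::set_favorite(data.favorite.unwrap_or(false), &cipher.uuid, &headers.user.uuid, &conn)?;
let is_favorite = Favorite::is_favorite(&cipher.uuid, &headers.user.uuid, &conn);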
// Delete all favorite entries associated with the specified cipher.
pub fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(favorites::table.filter(favorites::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
.map_res("Error removing favorites by cipher")
}}
}
// Delete all favorite entries associated with the specified user.
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(favorites::table.filter(favorites::user_uuid.eq(user_uuid)))
.execute(conn)
.map_res("Error removing favorites by user")
}}
}
}

View File

@@ -3,26 +3,28 @@ use serde_json::Value;
use super::{Cipher, User};
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "folders"]
#[belongs_to(User, foreign_key = "user_uuid")]
#[primary_key(uuid)]
pub struct Folder {
pub uuid: String,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
pub user_uuid: String,
pub name: String,
}
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "folders"]
#[belongs_to(User, foreign_key = "user_uuid")]
#[primary_key(uuid)]
pub struct Folder {
pub uuid: String,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
pub user_uuid: String,
pub name: String,
}
#[derive(Debug, Identifiable, Queryable, Insertable, Associations)]
#[table_name = "folders_ciphers"]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")]
#[belongs_to(Folder, foreign_key = "folder_uuid")]
#[primary_key(cipher_uuid, folder_uuid)]
pub struct FolderCipher {
pub cipher_uuid: String,
pub folder_uuid: String,
#[derive(Debug, Identifiable, Queryable, Insertable, Associations)]
#[table_name = "folders_ciphers"]
#[belongs_to(Cipher, foreign_key = "cipher_uuid")]
#[belongs_to(Folder, foreign_key = "folder_uuid")]
#[primary_key(cipher_uuid, folder_uuid)]
pub struct FolderCipher {
pub cipher_uuid: String,
pub folder_uuid: String,
}
}
/// Local methods
@@ -61,47 +63,58 @@ impl FolderCipher {
}
}
use crate::db::schema::{folders, folders_ciphers};
use crate::db::DbConn;
use diesel::prelude::*;
use crate::api::EmptyResult;
use crate::error::MapResult;
/// Database methods
impl Folder {
#[cfg(feature = "postgresql")]
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn);
self.updated_at = Utc::now().naive_utc();
diesel::insert_into(folders::table)
.values(&*self)
.on_conflict(folders::uuid)
.do_update()
.set(&*self)
.execute(&**conn)
.map_res("Error saving folder")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn);
self.updated_at = Utc::now().naive_utc();
diesel::replace_into(folders::table)
.values(&*self)
.execute(&**conn)
.map_res("Error saving folder")
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(folders::table)
.values(FolderDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(folders::table)
.filter(folders::uuid.eq(&self.uuid))
.set(FolderDb::to_db(self))
.execute(conn)
.map_res("Error saving folder")
}
Err(e) => Err(e.into()),
}.map_res("Error saving folder")
}
postgresql {
let value = FolderDb::to_db(self);
diesel::insert_into(folders::table)
.values(&value)
.on_conflict(folders::uuid)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error saving folder")
}
}
}
pub fn delete(&self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn);
FolderCipher::delete_all_by_folder(&self.uuid, &conn)?;
diesel::delete(folders::table.filter(folders::uuid.eq(&self.uuid)))
.execute(&**conn)
.map_res("Error deleting folder")
db_run! { conn: {
diesel::delete(folders::table.filter(folders::uuid.eq(&self.uuid)))
.execute(conn)
.map_res("Error deleting folder")
}}
}
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
@@ -112,73 +125,95 @@ impl Folder {
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
folders::table
.filter(folders::uuid.eq(uuid))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
folders::table
.filter(folders::uuid.eq(uuid))
.first::<FolderDb>(conn)
.ok()
.from_db()
}}
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
folders::table
.filter(folders::user_uuid.eq(user_uuid))
.load::<Self>(&**conn)
.expect("Error loading folders")
db_run! { conn: {
folders::table
.filter(folders::user_uuid.eq(user_uuid))
.load::<FolderDb>(conn)
.expect("Error loading folders")
.from_db()
}}
}
}
impl FolderCipher {
#[cfg(feature = "postgresql")]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
diesel::insert_into(folders_ciphers::table)
.values(&*self)
.on_conflict((folders_ciphers::cipher_uuid, folders_ciphers::folder_uuid))
.do_nothing()
.execute(&**conn)
.map_res("Error adding cipher to folder")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
diesel::replace_into(folders_ciphers::table)
.values(&*self)
.execute(&**conn)
.map_res("Error adding cipher to folder")
db_run! { conn:
sqlite, mysql {
// Not checking for ForeignKey Constraints here.
// No other table references folders_ciphers, so the delete performed by replace_into()
// cannot trigger a Foreign Key Violation; its own constraints only point to other tables.
diesel::replace_into(folders_ciphers::table)
.values(FolderCipherDb::to_db(self))
.execute(conn)
.map_res("Error adding cipher to folder")
}
postgresql {
diesel::insert_into(folders_ciphers::table)
.values(FolderCipherDb::to_db(self))
.on_conflict((folders_ciphers::cipher_uuid, folders_ciphers::folder_uuid))
.do_nothing()
.execute(conn)
.map_res("Error adding cipher to folder")
}
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
diesel::delete(
folders_ciphers::table
.filter(folders_ciphers::cipher_uuid.eq(self.cipher_uuid))
.filter(folders_ciphers::folder_uuid.eq(self.folder_uuid)),
)
.execute(&**conn)
.map_res("Error removing cipher from folder")
db_run! { conn: {
diesel::delete(
folders_ciphers::table
.filter(folders_ciphers::cipher_uuid.eq(self.cipher_uuid))
.filter(folders_ciphers::folder_uuid.eq(self.folder_uuid)),
)
.execute(conn)
.map_res("Error removing cipher from folder")
}}
}
pub fn delete_all_by_cipher(cipher_uuid: &str, conn: &DbConn) -> EmptyResult {
diesel::delete(folders_ciphers::table.filter(folders_ciphers::cipher_uuid.eq(cipher_uuid)))
.execute(&**conn)
.map_res("Error removing cipher from folders")
db_run! { conn: {
diesel::delete(folders_ciphers::table.filter(folders_ciphers::cipher_uuid.eq(cipher_uuid)))
.execute(conn)
.map_res("Error removing cipher from folders")
}}
}
pub fn delete_all_by_folder(folder_uuid: &str, conn: &DbConn) -> EmptyResult {
diesel::delete(folders_ciphers::table.filter(folders_ciphers::folder_uuid.eq(folder_uuid)))
.execute(&**conn)
.map_res("Error removing ciphers from folder")
db_run! { conn: {
diesel::delete(folders_ciphers::table.filter(folders_ciphers::folder_uuid.eq(folder_uuid)))
.execute(conn)
.map_res("Error removing ciphers from folder")
}}
}
pub fn find_by_folder_and_cipher(folder_uuid: &str, cipher_uuid: &str, conn: &DbConn) -> Option<Self> {
folders_ciphers::table
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
.filter(folders_ciphers::cipher_uuid.eq(cipher_uuid))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
folders_ciphers::table
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
.filter(folders_ciphers::cipher_uuid.eq(cipher_uuid))
.first::<FolderCipherDb>(conn)
.ok()
.from_db()
}}
}
pub fn find_by_folder(folder_uuid: &str, conn: &DbConn) -> Vec<Self> {
folders_ciphers::table
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
.load::<Self>(&**conn)
.expect("Error loading folders")
db_run! { conn: {
folders_ciphers::table
.filter(folders_ciphers::folder_uuid.eq(folder_uuid))
.load::<FolderCipherDb>(conn)
.expect("Error loading folders")
.from_db()
}}
}
}

View File

@@ -1,21 +1,21 @@
mod attachment;
mod cipher;
mod device;
mod folder;
mod user;
mod collection;
mod device;
mod favorite;
mod folder;
mod org_policy;
mod organization;
mod two_factor;
mod org_policy;
mod user;
pub use self::attachment::Attachment;
pub use self::cipher::Cipher;
pub use self::collection::{Collection, CollectionCipher, CollectionUser};
pub use self::device::Device;
pub use self::favorite::Favorite;
pub use self::folder::{Folder, FolderCipher};
pub use self::organization::Organization;
pub use self::organization::{UserOrgStatus, UserOrgType, UserOrganization};
pub use self::org_policy::{OrgPolicy, OrgPolicyType};
pub use self::organization::{Organization, UserOrgStatus, UserOrgType, UserOrganization};
pub use self::two_factor::{TwoFactor, TwoFactorType};
pub use self::user::{Invitation, User};
pub use self::org_policy::{OrgPolicy, OrgPolicyType};
pub use self::user::{Invitation, User, UserStampException};

View File

@@ -1,23 +1,23 @@
use diesel::prelude::*;
use serde_json::Value;
use crate::api::EmptyResult;
use crate::db::schema::org_policies;
use crate::db::DbConn;
use crate::error::MapResult;
use super::Organization;
use super::{Organization, UserOrgStatus};
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "org_policies"]
#[belongs_to(Organization, foreign_key = "org_uuid")]
#[primary_key(uuid)]
pub struct OrgPolicy {
pub uuid: String,
pub org_uuid: String,
pub atype: i32,
pub enabled: bool,
pub data: String,
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "org_policies"]
#[belongs_to(Organization, foreign_key = "org_uuid")]
#[primary_key(uuid)]
pub struct OrgPolicy {
pub uuid: String,
pub org_uuid: String,
pub atype: i32,
pub enabled: bool,
pub data: String,
}
}
#[allow(dead_code)]
@@ -55,87 +55,119 @@ impl OrgPolicy {
/// Database methods
impl OrgPolicy {
#[cfg(feature = "postgresql")]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
// We need to make sure we're not going to violate the unique constraint on org_uuid and atype.
// This happens automatically on other DBMS backends due to replace_into(). PostgreSQL does
// not support multiple constraints on ON CONFLICT clauses.
diesel::delete(
org_policies::table
.filter(org_policies::org_uuid.eq(&self.org_uuid))
.filter(org_policies::atype.eq(&self.atype)),
)
.execute(&**conn)
.map_res("Error deleting org_policy for insert")?;
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(org_policies::table)
.values(OrgPolicyDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(org_policies::table)
.filter(org_policies::uuid.eq(&self.uuid))
.set(OrgPolicyDb::to_db(self))
.execute(conn)
.map_res("Error saving org_policy")
}
Err(e) => Err(e.into()),
}.map_res("Error saving org_policy")
}
postgresql {
let value = OrgPolicyDb::to_db(self);
// We need to make sure we're not going to violate the unique constraint on org_uuid and atype.
// This happens automatically on other DBMS backends due to replace_into(). PostgreSQL does
// not support multiple constraints on ON CONFLICT clauses.
diesel::delete(
org_policies::table
.filter(org_policies::org_uuid.eq(&self.org_uuid))
.filter(org_policies::atype.eq(&self.atype)),
)
.execute(conn)
.map_res("Error deleting org_policy for insert")?;
diesel::insert_into(org_policies::table)
.values(self)
.on_conflict(org_policies::uuid)
.do_update()
.set(self)
.execute(&**conn)
.map_res("Error saving org_policy")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
diesel::replace_into(org_policies::table)
.values(&*self)
.execute(&**conn)
.map_res("Error saving org_policy")
diesel::insert_into(org_policies::table)
.values(&value)
.on_conflict(org_policies::uuid)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error saving org_policy")
}
}
}
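A hedged usage sketch of the save() above, relying on the (org_uuid, atype) uniqueness that the comments describe; the handler context is invented, the methods and fields are from this diff:
// Enable an already-stored policy of the given type for an organization.
if let Some(mut policy) = OrgPolicy::find_by_org_and_type(&org.uuid, pol_type, &conn) {
    policy.enabled = true;
    policy.save(&conn)?;
}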
pub fn delete(self, conn: &DbConn) -> EmptyResult {
diesel::delete(org_policies::table.filter(org_policies::uuid.eq(self.uuid)))
.execute(&**conn)
.map_res("Error deleting org_policy")
db_run! { conn: {
diesel::delete(org_policies::table.filter(org_policies::uuid.eq(self.uuid)))
.execute(conn)
.map_res("Error deleting org_policy")
}}
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
org_policies::table
.filter(org_policies::uuid.eq(uuid))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
org_policies::table
.filter(org_policies::uuid.eq(uuid))
.first::<OrgPolicyDb>(conn)
.ok()
.from_db()
}}
}
pub fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
org_policies::table
.filter(org_policies::org_uuid.eq(org_uuid))
.load::<Self>(&**conn)
.expect("Error loading org_policy")
db_run! { conn: {
org_policies::table
.filter(org_policies::org_uuid.eq(org_uuid))
.load::<OrgPolicyDb>(conn)
.expect("Error loading org_policy")
.from_db()
}}
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
use crate::db::schema::users_organizations;
org_policies::table
.left_join(
users_organizations::table.on(
users_organizations::org_uuid.eq(org_policies::org_uuid)
.and(users_organizations::user_uuid.eq(user_uuid)))
)
.select(org_policies::all_columns)
.load::<Self>(&**conn)
.expect("Error loading org_policy")
db_run! { conn: {
org_policies::table
.inner_join(
users_organizations::table.on(
users_organizations::org_uuid.eq(org_policies::org_uuid)
.and(users_organizations::user_uuid.eq(user_uuid)))
)
.filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
)
.select(org_policies::all_columns)
.load::<OrgPolicyDb>(conn)
.expect("Error loading org_policy")
.from_db()
}}
}
pub fn find_by_org_and_type(org_uuid: &str, atype: i32, conn: &DbConn) -> Option<Self> {
org_policies::table
.filter(org_policies::org_uuid.eq(org_uuid))
.filter(org_policies::atype.eq(atype))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
org_policies::table
.filter(org_policies::org_uuid.eq(org_uuid))
.filter(org_policies::atype.eq(atype))
.first::<OrgPolicyDb>(conn)
.ok()
.from_db()
}}
}
pub fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
diesel::delete(org_policies::table.filter(org_policies::org_uuid.eq(org_uuid)))
.execute(&**conn)
.map_res("Error deleting org_policy")
db_run! { conn: {
diesel::delete(org_policies::table.filter(org_policies::org_uuid.eq(org_uuid)))
.execute(conn)
.map_res("Error deleting org_policy")
}}
}
/*pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid)))
.execute(&**conn)
.map_res("Error deleting twofactors")
db_run! { conn: {
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid)))
.execute(conn)
.map_res("Error deleting twofactors")
}}
}*/
}

View File

@@ -4,27 +4,29 @@ use num_traits::FromPrimitive;
use super::{CollectionUser, User, OrgPolicy};
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "organizations"]
#[primary_key(uuid)]
pub struct Organization {
pub uuid: String,
pub name: String,
pub billing_email: String,
}
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "organizations"]
#[primary_key(uuid)]
pub struct Organization {
pub uuid: String,
pub name: String,
pub billing_email: String,
}
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "users_organizations"]
#[primary_key(uuid)]
pub struct UserOrganization {
pub uuid: String,
pub user_uuid: String,
pub org_uuid: String,
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "users_organizations"]
#[primary_key(uuid)]
pub struct UserOrganization {
pub uuid: String,
pub user_uuid: String,
pub org_uuid: String,
pub access_all: bool,
pub akey: String,
pub status: i32,
pub atype: i32,
pub access_all: bool,
pub akey: String,
pub status: i32,
pub atype: i32,
}
}
pub enum UserOrgStatus {
@@ -42,24 +44,28 @@ pub enum UserOrgType {
Manager = 3,
}
impl UserOrgType {
pub fn from_str(s: &str) -> Option<Self> {
match s {
"0" | "Owner" => Some(UserOrgType::Owner),
"1" | "Admin" => Some(UserOrgType::Admin),
"2" | "User" => Some(UserOrgType::User),
"3" | "Manager" => Some(UserOrgType::Manager),
_ => None,
}
}
}
impl Ord for UserOrgType {
fn cmp(&self, other: &UserOrgType) -> Ordering {
if self == other {
Ordering::Equal
} else {
match self {
UserOrgType::Owner => Ordering::Greater,
UserOrgType::Admin => match other {
UserOrgType::Owner => Ordering::Less,
_ => Ordering::Greater,
},
UserOrgType::Manager => match other {
UserOrgType::Owner | UserOrgType::Admin => Ordering::Less,
_ => Ordering::Greater,
},
UserOrgType::User => Ordering::Less,
}
}
// For easy comparison, map each variant to an access level (where 0 is lowest).
static ACCESS_LEVEL: [i32; 4] = [
3, // Owner
2, // Admin
0, // User
1, // Manager
];
ACCESS_LEVEL[*self as usize].cmp(&ACCESS_LEVEL[*other as usize])
}
}
@@ -127,18 +133,6 @@ impl PartialOrd<UserOrgType> for i32 {
}
}
impl UserOrgType {
pub fn from_str(s: &str) -> Option<Self> {
match s {
"0" | "Owner" => Some(UserOrgType::Owner),
"1" | "Admin" => Some(UserOrgType::Admin),
"2" | "User" => Some(UserOrgType::User),
"3" | "Manager" => Some(UserOrgType::Manager),
_ => None,
}
}
}
/// Local methods
impl Organization {
pub fn new(name: String, billing_email: String) -> Self {
@@ -196,16 +190,13 @@ impl UserOrganization {
}
}
use crate::db::schema::{ciphers_collections, organizations, users_collections, users_organizations};
use crate::db::DbConn;
use diesel::prelude::*;
use crate::api::EmptyResult;
use crate::error::MapResult;
/// Database methods
impl Organization {
#[cfg(feature = "postgresql")]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
UserOrganization::find_by_org(&self.uuid, conn)
.iter()
@@ -213,27 +204,36 @@ impl Organization {
User::update_uuid_revision(&user_org.user_uuid, conn);
});
diesel::insert_into(organizations::table)
.values(self)
.on_conflict(organizations::uuid)
.do_update()
.set(self)
.execute(&**conn)
.map_res("Error saving organization")
}
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(organizations::table)
.values(OrganizationDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(organizations::table)
.filter(organizations::uuid.eq(&self.uuid))
.set(OrganizationDb::to_db(self))
.execute(conn)
.map_res("Error saving organization")
}
Err(e) => Err(e.into()),
}.map_res("Error saving organization")
#[cfg(not(feature = "postgresql"))]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
UserOrganization::find_by_org(&self.uuid, conn)
.iter()
.for_each(|user_org| {
User::update_uuid_revision(&user_org.user_uuid, conn);
});
diesel::replace_into(organizations::table)
.values(self)
.execute(&**conn)
.map_res("Error saving organization")
}
postgresql {
let value = OrganizationDb::to_db(self);
diesel::insert_into(organizations::table)
.values(&value)
.on_conflict(organizations::uuid)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error saving organization")
}
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
@@ -244,27 +244,34 @@ impl Organization {
UserOrganization::delete_all_by_organization(&self.uuid, &conn)?;
OrgPolicy::delete_all_by_organization(&self.uuid, &conn)?;
diesel::delete(organizations::table.filter(organizations::uuid.eq(self.uuid)))
.execute(&**conn)
.map_res("Error saving organization")
db_run! { conn: {
diesel::delete(organizations::table.filter(organizations::uuid.eq(self.uuid)))
.execute(conn)
.map_res("Error saving organization")
}}
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
organizations::table
.filter(organizations::uuid.eq(uuid))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
organizations::table
.filter(organizations::uuid.eq(uuid))
.first::<OrganizationDb>(conn)
.ok().from_db()
}}
}
pub fn get_all(conn: &DbConn) -> Vec<Self> {
organizations::table.load::<Self>(&**conn).expect("Error loading organizations")
db_run! { conn: {
organizations::table.load::<OrganizationDb>(conn).expect("Error loading organizations").from_db()
}}
}
}
impl UserOrganization {
pub fn to_json(&self, conn: &DbConn) -> Value {
let org = Organization::find_by_uuid(&self.org_uuid, conn).unwrap();
json!({
"Id": self.org_uuid,
"Name": org.name,
@@ -345,38 +352,50 @@ impl UserOrganization {
"Object": "organizationUserDetails",
})
}
#[cfg(feature = "postgresql")]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn);
diesel::insert_into(users_organizations::table)
.values(self)
.on_conflict(users_organizations::uuid)
.do_update()
.set(self)
.execute(&**conn)
.map_res("Error adding user to organization")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn);
diesel::replace_into(users_organizations::table)
.values(self)
.execute(&**conn)
.map_res("Error adding user to organization")
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(users_organizations::table)
.values(UserOrganizationDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(users_organizations::table)
.filter(users_organizations::uuid.eq(&self.uuid))
.set(UserOrganizationDb::to_db(self))
.execute(conn)
.map_res("Error adding user to organization")
}
Err(e) => Err(e.into()),
}.map_res("Error adding user to organization")
}
postgresql {
let value = UserOrganizationDb::to_db(self);
diesel::insert_into(users_organizations::table)
.values(&value)
.on_conflict(users_organizations::uuid)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error adding user to organization")
}
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
User::update_uuid_revision(&self.user_uuid, conn);
CollectionUser::delete_all_by_user(&self.user_uuid, &conn)?;
CollectionUser::delete_all_by_user_and_org(&self.user_uuid, &self.org_uuid, &conn)?;
diesel::delete(users_organizations::table.filter(users_organizations::uuid.eq(self.uuid)))
.execute(&**conn)
.map_res("Error removing user from organization")
db_run! { conn: {
diesel::delete(users_organizations::table.filter(users_organizations::uuid.eq(self.uuid)))
.execute(conn)
.map_res("Error removing user from organization")
}}
}
pub fn delete_all_by_organization(org_uuid: &str, conn: &DbConn) -> EmptyResult {
@@ -403,107 +422,142 @@ impl UserOrganization {
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
users_organizations::table
.filter(users_organizations::uuid.eq(uuid))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
users_organizations::table
.filter(users_organizations::uuid.eq(uuid))
.first::<UserOrganizationDb>(conn)
.ok().from_db()
}}
}
pub fn find_by_uuid_and_org(uuid: &str, org_uuid: &str, conn: &DbConn) -> Option<Self> {
users_organizations::table
.filter(users_organizations::uuid.eq(uuid))
.filter(users_organizations::org_uuid.eq(org_uuid))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
users_organizations::table
.filter(users_organizations::uuid.eq(uuid))
.filter(users_organizations::org_uuid.eq(org_uuid))
.first::<UserOrganizationDb>(conn)
.ok().from_db()
}}
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.filter(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
.load::<Self>(&**conn)
.unwrap_or_default()
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.filter(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
.load::<UserOrganizationDb>(conn)
.unwrap_or_default().from_db()
}}
}
pub fn find_invited_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.filter(users_organizations::status.eq(UserOrgStatus::Invited as i32))
.load::<Self>(&**conn)
.unwrap_or_default()
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.filter(users_organizations::status.eq(UserOrgStatus::Invited as i32))
.load::<UserOrganizationDb>(conn)
.unwrap_or_default().from_db()
}}
}
pub fn find_any_state_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.load::<Self>(&**conn)
.unwrap_or_default()
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.load::<UserOrganizationDb>(conn)
.unwrap_or_default().from_db()
}}
}
pub fn find_by_org(org_uuid: &str, conn: &DbConn) -> Vec<Self> {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.load::<Self>(&**conn)
.expect("Error loading user organizations")
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.load::<UserOrganizationDb>(conn)
.expect("Error loading user organizations").from_db()
}}
}
pub fn count_by_org(org_uuid: &str, conn: &DbConn) -> i64 {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.count()
.first::<i64>(&**conn)
.ok()
.unwrap_or(0)
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.count()
.first::<i64>(conn)
.ok()
.unwrap_or(0)
}}
}
pub fn find_by_org_and_type(org_uuid: &str, atype: i32, conn: &DbConn) -> Vec<Self> {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.filter(users_organizations::atype.eq(atype))
.load::<Self>(&**conn)
.expect("Error loading user organizations")
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.filter(users_organizations::atype.eq(atype))
.load::<UserOrganizationDb>(conn)
.expect("Error loading user organizations").from_db()
}}
}
pub fn find_by_user_and_org(user_uuid: &str, org_uuid: &str, conn: &DbConn) -> Option<Self> {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.filter(users_organizations::org_uuid.eq(org_uuid))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.filter(users_organizations::org_uuid.eq(org_uuid))
.first::<UserOrganizationDb>(conn)
.ok().from_db()
}}
}
pub fn find_by_cipher_and_org(cipher_uuid: &str, org_uuid: &str, conn: &DbConn) -> Vec<Self> {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.left_join(users_collections::table.on(
users_collections::user_uuid.eq(users_organizations::user_uuid)
))
.left_join(ciphers_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid).and(
ciphers_collections::cipher_uuid.eq(&cipher_uuid)
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.left_join(users_collections::table.on(
users_collections::user_uuid.eq(users_organizations::user_uuid)
))
.left_join(ciphers_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid).and(
ciphers_collections::cipher_uuid.eq(&cipher_uuid)
)
))
.filter(
users_organizations::access_all.eq(true).or( // AccessAll..
ciphers_collections::cipher_uuid.eq(&cipher_uuid) // ..or access to collection with cipher
)
)
))
.filter(
users_organizations::access_all.eq(true).or( // AccessAll..
ciphers_collections::cipher_uuid.eq(&cipher_uuid) // ..or access to collection with cipher
)
)
.select(users_organizations::all_columns)
.load::<Self>(&**conn).expect("Error loading user organizations")
.select(users_organizations::all_columns)
.load::<UserOrganizationDb>(conn).expect("Error loading user organizations").from_db()
}}
}
pub fn find_by_collection_and_org(collection_uuid: &str, org_uuid: &str, conn: &DbConn) -> Vec<Self> {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.left_join(users_collections::table.on(
users_collections::user_uuid.eq(users_organizations::user_uuid)
))
.filter(
users_organizations::access_all.eq(true).or( // AccessAll..
users_collections::collection_uuid.eq(&collection_uuid) // ..or access to collection with cipher
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.left_join(users_collections::table.on(
users_collections::user_uuid.eq(users_organizations::user_uuid)
))
.filter(
users_organizations::access_all.eq(true).or( // AccessAll..
users_collections::collection_uuid.eq(&collection_uuid) // ..or access to collection with cipher
)
)
)
.select(users_organizations::all_columns)
.load::<Self>(&**conn).expect("Error loading user organizations")
.select(users_organizations::all_columns)
.load::<UserOrganizationDb>(conn).expect("Error loading user organizations").from_db()
}}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
#[allow(non_snake_case)]
fn partial_cmp_UserOrgType() {
assert!(UserOrgType::Owner > UserOrgType::Admin);
assert!(UserOrgType::Admin > UserOrgType::Manager);
assert!(UserOrgType::Manager > UserOrgType::User);
}
}

View File

@@ -1,24 +1,24 @@
use diesel::prelude::*;
use serde_json::Value;
use crate::api::EmptyResult;
use crate::db::schema::twofactor;
use crate::db::DbConn;
use crate::error::MapResult;
use super::User;
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "twofactor"]
#[belongs_to(User, foreign_key = "user_uuid")]
#[primary_key(uuid)]
pub struct TwoFactor {
pub uuid: String,
pub user_uuid: String,
pub atype: i32,
pub enabled: bool,
pub data: String,
pub last_used: i32,
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
#[table_name = "twofactor"]
#[belongs_to(User, foreign_key = "user_uuid")]
#[primary_key(uuid)]
pub struct TwoFactor {
pub uuid: String,
pub user_uuid: String,
pub atype: i32,
pub enabled: bool,
pub data: String,
pub last_used: i32,
}
}
#[allow(dead_code)]
@@ -70,57 +70,80 @@ impl TwoFactor {
/// Database methods
impl TwoFactor {
#[cfg(feature = "postgresql")]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
// We need to make sure we're not going to violate the unique constraint on user_uuid and atype.
// This happens automatically on other DBMS backends due to replace_into(). PostgreSQL does
// not support multiple constraints on ON CONFLICT clauses.
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(&self.user_uuid)).filter(twofactor::atype.eq(&self.atype)))
.execute(&**conn)
.map_res("Error deleting twofactor for insert")?;
db_run! { conn:
sqlite, mysql {
match diesel::replace_into(twofactor::table)
.values(TwoFactorDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(twofactor::table)
.filter(twofactor::uuid.eq(&self.uuid))
.set(TwoFactorDb::to_db(self))
.execute(conn)
.map_res("Error saving twofactor")
}
Err(e) => Err(e.into()),
}.map_res("Error saving twofactor")
}
postgresql {
let value = TwoFactorDb::to_db(self);
// We need to make sure we're not going to violate the unique constraint on user_uuid and atype.
// This happens automatically on other DBMS backends due to replace_into(). PostgreSQL does
// not support multiple constraints on ON CONFLICT clauses.
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(&self.user_uuid)).filter(twofactor::atype.eq(&self.atype)))
.execute(conn)
.map_res("Error deleting twofactor for insert")?;
diesel::insert_into(twofactor::table)
.values(self)
.on_conflict(twofactor::uuid)
.do_update()
.set(self)
.execute(&**conn)
.map_res("Error saving twofactor")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
diesel::replace_into(twofactor::table)
.values(self)
.execute(&**conn)
.map_res("Error saving twofactor")
diesel::insert_into(twofactor::table)
.values(&value)
.on_conflict(twofactor::uuid)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error saving twofactor")
}
}
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
diesel::delete(twofactor::table.filter(twofactor::uuid.eq(self.uuid)))
.execute(&**conn)
.map_res("Error deleting twofactor")
db_run! { conn: {
diesel::delete(twofactor::table.filter(twofactor::uuid.eq(self.uuid)))
.execute(conn)
.map_res("Error deleting twofactor")
}}
}
pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
twofactor::table
.filter(twofactor::user_uuid.eq(user_uuid))
.filter(twofactor::atype.lt(1000)) // Filter implementation types
.load::<Self>(&**conn)
.expect("Error loading twofactor")
db_run! { conn: {
twofactor::table
.filter(twofactor::user_uuid.eq(user_uuid))
.filter(twofactor::atype.lt(1000)) // Filter implementation types
.load::<TwoFactorDb>(conn)
.expect("Error loading twofactor")
.from_db()
}}
}
pub fn find_by_user_and_type(user_uuid: &str, atype: i32, conn: &DbConn) -> Option<Self> {
twofactor::table
.filter(twofactor::user_uuid.eq(user_uuid))
.filter(twofactor::atype.eq(atype))
.first::<Self>(&**conn)
.ok()
db_run! { conn: {
twofactor::table
.filter(twofactor::user_uuid.eq(user_uuid))
.filter(twofactor::atype.eq(atype))
.first::<TwoFactorDb>(conn)
.ok()
.from_db()
}}
}
pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid)))
.execute(&**conn)
.map_res("Error deleting twofactors")
db_run! { conn: {
diesel::delete(twofactor::table.filter(twofactor::user_uuid.eq(user_uuid)))
.execute(conn)
.map_res("Error deleting twofactors")
}}
}
}

View File

@@ -4,43 +4,55 @@ use serde_json::Value;
use crate::crypto;
use crate::CONFIG;
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "users"]
#[changeset_options(treat_none_as_null="true")]
#[primary_key(uuid)]
pub struct User {
pub uuid: String,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
pub verified_at: Option<NaiveDateTime>,
pub last_verifying_at: Option<NaiveDateTime>,
pub login_verify_count: i32,
db_object! {
#[derive(Debug, Identifiable, Queryable, Insertable, AsChangeset)]
#[table_name = "users"]
#[changeset_options(treat_none_as_null="true")]
#[primary_key(uuid)]
pub struct User {
pub uuid: String,
pub enabled: bool,
pub created_at: NaiveDateTime,
pub updated_at: NaiveDateTime,
pub verified_at: Option<NaiveDateTime>,
pub last_verifying_at: Option<NaiveDateTime>,
pub login_verify_count: i32,
pub email: String,
pub email_new: Option<String>,
pub email_new_token: Option<String>,
pub name: String,
pub email: String,
pub email_new: Option<String>,
pub email_new_token: Option<String>,
pub name: String,
pub password_hash: Vec<u8>,
pub salt: Vec<u8>,
pub password_iterations: i32,
pub password_hint: Option<String>,
pub password_hash: Vec<u8>,
pub salt: Vec<u8>,
pub password_iterations: i32,
pub password_hint: Option<String>,
pub akey: String,
pub private_key: Option<String>,
pub public_key: Option<String>,
pub akey: String,
pub private_key: Option<String>,
pub public_key: Option<String>,
#[column_name = "totp_secret"]
_totp_secret: Option<String>,
pub totp_recover: Option<String>,
#[column_name = "totp_secret"] // Note, this is only added to the UserDb structs, not to User
_totp_secret: Option<String>,
pub totp_recover: Option<String>,
pub security_stamp: String,
pub security_stamp: String,
pub stamp_exception: Option<String>,
pub equivalent_domains: String,
pub excluded_globals: String,
pub equivalent_domains: String,
pub excluded_globals: String,
pub client_kdf_type: i32,
pub client_kdf_iter: i32,
pub client_kdf_type: i32,
pub client_kdf_iter: i32,
}
#[derive(Debug, Identifiable, Queryable, Insertable)]
#[table_name = "invitations"]
#[primary_key(email)]
pub struct Invitation {
pub email: String,
}
}
enum UserStatus {
@@ -49,6 +61,12 @@ enum UserStatus {
_Disabled = 2,
}
#[derive(Serialize, Deserialize)]
pub struct UserStampException {
pub route: String,
pub security_stamp: String
}
/// Local methods
impl User {
pub const CLIENT_KDF_TYPE_DEFAULT: i32 = 0; // PBKDF2: 0
@@ -60,6 +78,7 @@ impl User {
Self {
uuid: crate::util::get_uuid(),
enabled: true,
created_at: now,
updated_at: now,
verified_at: None,
@@ -76,6 +95,7 @@ impl User {
password_iterations: CONFIG.password_iterations(),
security_stamp: crate::util::get_uuid(),
stamp_exception: None,
password_hint: None,
private_key: None,
@@ -109,19 +129,56 @@ impl User {
}
}
pub fn set_password(&mut self, password: &str) {
/// Sets a new password hash and resets the security_stamp.
/// When `allow_next_route` is given, a stamp exception is stored first so that one
/// follow-up request to that route can still be authenticated with the previous stamp.
///
/// # Arguments
///
/// * `password` - A str which contains a hashed version of the user's master password.
/// * `allow_next_route` - An Option<&str> with the function name of the next allowed (rocket) route.
///
pub fn set_password(&mut self, password: &str, allow_next_route: Option<&str>) {
self.password_hash = crypto::hash_password(password.as_bytes(), &self.salt, self.password_iterations as u32);
if let Some(route) = allow_next_route {
self.set_stamp_exception(route);
}
self.reset_security_stamp()
}
pub fn reset_security_stamp(&mut self) {
self.security_stamp = crate::util::get_uuid();
}
/// Sets the stamp_exception so that a single subsequent request matching the given route is still allowed with the current security-stamp.
///
/// # Arguments
/// * `route_exception` - A str with the function name of the next allowed (rocket) route.
///
/// ### Future
/// In the future it could be possible that we need more of these exception routes.
/// In that case we could use a Vec<UserStampException> and add multiple exceptions.
pub fn set_stamp_exception(&mut self, route_exception: &str) {
let stamp_exception = UserStampException {
route: route_exception.to_string(),
security_stamp: self.security_stamp.to_string()
};
self.stamp_exception = Some(serde_json::to_string(&stamp_exception).unwrap_or_default());
}
/// Resets the stamp_exception to prevent re-use of the previous security-stamp
///
/// ### Future
/// In the future it could be possible that we need more of these exception routes.
/// In that case we could use a Vec<UserStampException> and add multiple exceptions.
pub fn reset_stamp_exception(&mut self) {
self.stamp_exception = None;
}
}
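Putting the stamp-exception helpers together, a hedged sketch of the intended flow; the route name and the surrounding handler are illustrative assumptions, only the User methods and fields come from this diff:
// Password change that also rotates the account encryption key: store a one-shot
// exception for the follow-up route before the security stamp is reset...
user.set_password(&new_password_hash, Some("post_rotatekey"));
user.save(&conn)?;
// ...so the client's next request to that route still passes the stamp check.
// Once the re-encrypted data has been posted, clear the exception again.
user.reset_stamp_exception();
user.save(&conn)?;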
use super::{Cipher, Device, Folder, TwoFactor, UserOrgType, UserOrganization};
use crate::db::schema::{invitations, users};
use super::{Cipher, Device, Favorite, Folder, TwoFactor, UserOrgType, UserOrganization};
use crate::db::DbConn;
use diesel::prelude::*;
use crate::api::EmptyResult;
use crate::error::MapResult;
@ -158,7 +215,6 @@ impl User {
})
}
#[cfg(feature = "postgresql")]
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
if self.email.trim().is_empty() {
err!("User email can't be empty")
@ -166,49 +222,60 @@ impl User {
self.updated_at = Utc::now().naive_utc();
diesel::insert_into(users::table) // Insert or update
.values(&*self)
.on_conflict(users::uuid)
.do_update()
.set(&*self)
.execute(&**conn)
.map_res("Error saving user")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
if self.email.trim().is_empty() {
err!("User email can't be empty")
db_run! {conn:
sqlite, mysql {
match diesel::replace_into(users::table)
.values(UserDb::to_db(self))
.execute(conn)
{
Ok(_) => Ok(()),
// Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
diesel::update(users::table)
.filter(users::uuid.eq(&self.uuid))
.set(UserDb::to_db(self))
.execute(conn)
.map_res("Error saving user")
}
Err(e) => Err(e.into()),
}.map_res("Error saving user")
}
postgresql {
let value = UserDb::to_db(self);
diesel::insert_into(users::table) // Insert or update
.values(&value)
.on_conflict(users::uuid)
.do_update()
.set(&value)
.execute(conn)
.map_res("Error saving user")
}
}
self.updated_at = Utc::now().naive_utc();
diesel::replace_into(users::table) // Insert or update
.values(&*self)
.execute(&**conn)
.map_res("Error saving user")
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
for user_org in UserOrganization::find_by_user(&self.uuid, &*conn) {
for user_org in UserOrganization::find_by_user(&self.uuid, conn) {
if user_org.atype == UserOrgType::Owner {
let owner_type = UserOrgType::Owner as i32;
if UserOrganization::find_by_org_and_type(&user_org.org_uuid, owner_type, &conn).len() <= 1 {
if UserOrganization::find_by_org_and_type(&user_org.org_uuid, owner_type, conn).len() <= 1 {
err!("Can't delete last owner")
}
}
}
UserOrganization::delete_all_by_user(&self.uuid, &*conn)?;
Cipher::delete_all_by_user(&self.uuid, &*conn)?;
Folder::delete_all_by_user(&self.uuid, &*conn)?;
Device::delete_all_by_user(&self.uuid, &*conn)?;
TwoFactor::delete_all_by_user(&self.uuid, &*conn)?;
Invitation::take(&self.email, &*conn); // Delete invitation if any
UserOrganization::delete_all_by_user(&self.uuid, conn)?;
Cipher::delete_all_by_user(&self.uuid, conn)?;
Favorite::delete_all_by_user(&self.uuid, conn)?;
Folder::delete_all_by_user(&self.uuid, conn)?;
Device::delete_all_by_user(&self.uuid, conn)?;
TwoFactor::delete_all_by_user(&self.uuid, conn)?;
Invitation::take(&self.email, conn); // Delete invitation if any
diesel::delete(users::table.filter(users::uuid.eq(self.uuid)))
.execute(&**conn)
.map_res("Error deleting user")
db_run! {conn: {
diesel::delete(users::table.filter(users::uuid.eq(self.uuid)))
.execute(conn)
.map_res("Error deleting user")
}}
}
pub fn update_uuid_revision(uuid: &str, conn: &DbConn) {
@ -220,15 +287,14 @@ impl User {
pub fn update_all_revisions(conn: &DbConn) -> EmptyResult {
let updated_at = Utc::now().naive_utc();
crate::util::retry(
|| {
db_run! {conn: {
crate::util::retry(|| {
diesel::update(users::table)
.set(users::updated_at.eq(updated_at))
.execute(&**conn)
},
10,
)
.map_res("Error updating revision date for all users")
.execute(conn)
}, 10)
.map_res("Error updating revision date for all users")
}}
}
pub fn update_revision(&mut self, conn: &DbConn) -> EmptyResult {
@ -238,39 +304,45 @@ impl User {
}
fn _update_revision(uuid: &str, date: &NaiveDateTime, conn: &DbConn) -> EmptyResult {
crate::util::retry(
|| {
db_run! {conn: {
crate::util::retry(|| {
diesel::update(users::table.filter(users::uuid.eq(uuid)))
.set(users::updated_at.eq(date))
.execute(&**conn)
},
10,
)
.map_res("Error updating user revision")
.execute(conn)
}, 10)
.map_res("Error updating user revision")
}}
}
pub fn find_by_mail(mail: &str, conn: &DbConn) -> Option<Self> {
let lower_mail = mail.to_lowercase();
users::table
.filter(users::email.eq(lower_mail))
.first::<Self>(&**conn)
.ok()
db_run! {conn: {
users::table
.filter(users::email.eq(lower_mail))
.first::<UserDb>(conn)
.ok()
.from_db()
}}
}
pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
users::table.filter(users::uuid.eq(uuid)).first::<Self>(&**conn).ok()
db_run! {conn: {
users::table.filter(users::uuid.eq(uuid)).first::<UserDb>(conn).ok().from_db()
}}
}
pub fn get_all(conn: &DbConn) -> Vec<Self> {
users::table.load::<Self>(&**conn).expect("Error loading users")
db_run! {conn: {
users::table.load::<UserDb>(conn).expect("Error loading users").from_db()
}}
}
}
#[derive(Debug, Identifiable, Queryable, Insertable)]
#[table_name = "invitations"]
#[primary_key(email)]
pub struct Invitation {
pub email: String,
pub fn last_active(&self, conn: &DbConn) -> Option<NaiveDateTime> {
match Device::find_latest_active_by_user(&self.uuid, conn) {
Some(device) => Some(device.updated_at),
None => None
}
}
}
impl Invitation {
@ -278,44 +350,48 @@ impl Invitation {
Self { email }
}
#[cfg(feature = "postgresql")]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
if self.email.trim().is_empty() {
err!("Invitation email can't be empty")
}
diesel::insert_into(invitations::table)
.values(self)
.on_conflict(invitations::email)
.do_nothing()
.execute(&**conn)
.map_res("Error saving invitation")
}
#[cfg(not(feature = "postgresql"))]
pub fn save(&self, conn: &DbConn) -> EmptyResult {
if self.email.trim().is_empty() {
err!("Invitation email can't be empty")
db_run! {conn:
sqlite, mysql {
// Not checking for ForeignKey Constraints here
// Table invitations does not have any ForeignKey Constraints.
diesel::replace_into(invitations::table)
.values(InvitationDb::to_db(self))
.execute(conn)
.map_res("Error saving invitation")
}
postgresql {
diesel::insert_into(invitations::table)
.values(InvitationDb::to_db(self))
.on_conflict(invitations::email)
.do_nothing()
.execute(conn)
.map_res("Error saving invitation")
}
}
diesel::replace_into(invitations::table)
.values(self)
.execute(&**conn)
.map_res("Error saving invitation")
}
pub fn delete(self, conn: &DbConn) -> EmptyResult {
diesel::delete(invitations::table.filter(invitations::email.eq(self.email)))
.execute(&**conn)
.map_res("Error deleting invitation")
db_run! {conn: {
diesel::delete(invitations::table.filter(invitations::email.eq(self.email)))
.execute(conn)
.map_res("Error deleting invitation")
}}
}
pub fn find_by_mail(mail: &str, conn: &DbConn) -> Option<Self> {
let lower_mail = mail.to_lowercase();
invitations::table
.filter(invitations::email.eq(lower_mail))
.first::<Self>(&**conn)
.ok()
db_run! {conn: {
invitations::table
.filter(invitations::email.eq(lower_mail))
.first::<InvitationDb>(conn)
.ok()
.from_db()
}}
}
pub fn take(mail: &str, conn: &DbConn) -> bool {


@ -1,7 +1,7 @@
table! {
attachments (id) {
id -> Varchar,
cipher_uuid -> Varchar,
id -> Text,
cipher_uuid -> Text,
file_name -> Text,
file_size -> Integer,
akey -> Nullable<Text>,
@ -10,17 +10,16 @@ table! {
table! {
ciphers (uuid) {
uuid -> Varchar,
uuid -> Text,
created_at -> Datetime,
updated_at -> Datetime,
user_uuid -> Nullable<Varchar>,
organization_uuid -> Nullable<Varchar>,
user_uuid -> Nullable<Text>,
organization_uuid -> Nullable<Text>,
atype -> Integer,
name -> Text,
notes -> Nullable<Text>,
fields -> Nullable<Text>,
data -> Text,
favorite -> Bool,
password_history -> Nullable<Text>,
deleted_at -> Nullable<Datetime>,
}
@ -28,25 +27,25 @@ table! {
table! {
ciphers_collections (cipher_uuid, collection_uuid) {
cipher_uuid -> Varchar,
collection_uuid -> Varchar,
cipher_uuid -> Text,
collection_uuid -> Text,
}
}
table! {
collections (uuid) {
uuid -> Varchar,
org_uuid -> Varchar,
uuid -> Text,
org_uuid -> Text,
name -> Text,
}
}
table! {
devices (uuid) {
uuid -> Varchar,
uuid -> Text,
created_at -> Datetime,
updated_at -> Datetime,
user_uuid -> Varchar,
user_uuid -> Text,
name -> Text,
atype -> Integer,
push_token -> Nullable<Text>,
@ -55,33 +54,40 @@ table! {
}
}
table! {
favorites (user_uuid, cipher_uuid) {
user_uuid -> Text,
cipher_uuid -> Text,
}
}
table! {
folders (uuid) {
uuid -> Varchar,
uuid -> Text,
created_at -> Datetime,
updated_at -> Datetime,
user_uuid -> Varchar,
user_uuid -> Text,
name -> Text,
}
}
table! {
folders_ciphers (cipher_uuid, folder_uuid) {
cipher_uuid -> Varchar,
folder_uuid -> Varchar,
cipher_uuid -> Text,
folder_uuid -> Text,
}
}
table! {
invitations (email) {
email -> Varchar,
email -> Text,
}
}
table! {
org_policies (uuid) {
uuid -> Varchar,
org_uuid -> Varchar,
uuid -> Text,
org_uuid -> Text,
atype -> Integer,
enabled -> Bool,
data -> Text,
@ -90,7 +96,7 @@ table! {
table! {
organizations (uuid) {
uuid -> Varchar,
uuid -> Text,
name -> Text,
billing_email -> Text,
}
@ -98,8 +104,8 @@ table! {
table! {
twofactor (uuid) {
uuid -> Varchar,
user_uuid -> Varchar,
uuid -> Text,
user_uuid -> Text,
atype -> Integer,
enabled -> Bool,
data -> Text,
@ -109,18 +115,19 @@ table! {
table! {
users (uuid) {
uuid -> Varchar,
uuid -> Text,
enabled -> Bool,
created_at -> Datetime,
updated_at -> Datetime,
verified_at -> Nullable<Datetime>,
last_verifying_at -> Nullable<Datetime>,
login_verify_count -> Integer,
email -> Varchar,
email_new -> Nullable<Varchar>,
email_new_token -> Nullable<Varchar>,
email -> Text,
email_new -> Nullable<Text>,
email_new_token -> Nullable<Text>,
name -> Text,
password_hash -> Blob,
salt -> Blob,
password_hash -> Binary,
salt -> Binary,
password_iterations -> Integer,
password_hint -> Nullable<Text>,
akey -> Text,
@ -129,6 +136,7 @@ table! {
totp_secret -> Nullable<Text>,
totp_recover -> Nullable<Text>,
security_stamp -> Text,
stamp_exception -> Nullable<Text>,
equivalent_domains -> Text,
excluded_globals -> Text,
client_kdf_type -> Integer,
@ -138,8 +146,8 @@ table! {
table! {
users_collections (user_uuid, collection_uuid) {
user_uuid -> Varchar,
collection_uuid -> Varchar,
user_uuid -> Text,
collection_uuid -> Text,
read_only -> Bool,
hide_passwords -> Bool,
}
@ -147,9 +155,9 @@ table! {
table! {
users_organizations (uuid) {
uuid -> Varchar,
user_uuid -> Varchar,
org_uuid -> Varchar,
uuid -> Text,
user_uuid -> Text,
org_uuid -> Text,
access_all -> Bool,
akey -> Text,
status -> Integer,


@ -20,7 +20,6 @@ table! {
notes -> Nullable<Text>,
fields -> Nullable<Text>,
data -> Text,
favorite -> Bool,
password_history -> Nullable<Text>,
deleted_at -> Nullable<Timestamp>,
}
@ -55,6 +54,13 @@ table! {
}
}
table! {
favorites (user_uuid, cipher_uuid) {
user_uuid -> Text,
cipher_uuid -> Text,
}
}
table! {
folders (uuid) {
uuid -> Text,
@ -110,6 +116,7 @@ table! {
table! {
users (uuid) {
uuid -> Text,
enabled -> Bool,
created_at -> Timestamp,
updated_at -> Timestamp,
verified_at -> Nullable<Timestamp>,
@ -129,6 +136,7 @@ table! {
totp_secret -> Nullable<Text>,
totp_recover -> Nullable<Text>,
security_stamp -> Text,
stamp_exception -> Nullable<Text>,
equivalent_domains -> Text,
excluded_globals -> Text,
client_kdf_type -> Integer,


@ -20,7 +20,6 @@ table! {
notes -> Nullable<Text>,
fields -> Nullable<Text>,
data -> Text,
favorite -> Bool,
password_history -> Nullable<Text>,
deleted_at -> Nullable<Timestamp>,
}
@ -55,6 +54,13 @@ table! {
}
}
table! {
favorites (user_uuid, cipher_uuid) {
user_uuid -> Text,
cipher_uuid -> Text,
}
}
table! {
folders (uuid) {
uuid -> Text,
@ -110,6 +116,7 @@ table! {
table! {
users (uuid) {
uuid -> Text,
enabled -> Bool,
created_at -> Timestamp,
updated_at -> Timestamp,
verified_at -> Nullable<Timestamp>,
@ -129,6 +136,7 @@ table! {
totp_secret -> Nullable<Text>,
totp_recover -> Nullable<Text>,
security_stamp -> Text,
stamp_exception -> Nullable<Text>,
equivalent_domains -> Text,
excluded_globals -> Text,
client_kdf_type -> Integer,


@ -34,6 +34,9 @@ macro_rules! make_error {
}
use diesel::result::Error as DieselErr;
use diesel::ConnectionError as DieselConErr;
use diesel_migrations::RunMigrationsError as DieselMigErr;
use diesel::r2d2::PoolError as R2d2Err;
use handlebars::RenderError as HbErr;
use jsonwebtoken::errors::Error as JWTErr;
use regex::Error as RegexErr;
@ -48,7 +51,7 @@ use yubico::yubicoerror::YubicoError as YubiErr;
use lettre::address::AddressError as AddrErr;
use lettre::error::Error as LettreErr;
use lettre::message::mime::FromStrError as FromStrErr;
use lettre::transport::smtp::error::Error as SmtpErr;
use lettre::transport::smtp::Error as SmtpErr;
#[derive(Serialize)]
pub struct Empty {}
@ -66,6 +69,7 @@ make_error! {
// Used for special return values, like 2FA errors
JsonError(Value): _no_source, _serialize,
DbError(DieselErr): _has_source, _api_error,
R2d2Error(R2d2Err): _has_source, _api_error,
U2fError(U2fErr): _has_source, _api_error,
SerdeError(SerdeErr): _has_source, _api_error,
JWTError(JWTErr): _has_source, _api_error,
@ -77,10 +81,13 @@ make_error! {
RegexError(RegexErr): _has_source, _api_error,
YubiError(YubiErr): _has_source, _api_error,
LetreError(LettreErr): _has_source, _api_error,
LettreError(LettreErr): _has_source, _api_error,
AddressError(AddrErr): _has_source, _api_error,
SmtpError(SmtpErr): _has_source, _api_error,
FromStrError(FromStrErr): _has_source, _api_error,
DieselConError(DieselConErr): _has_source, _api_error,
DieselMigError(DieselMigErr): _has_source, _api_error,
}
impl std::fmt::Debug for Error {
@ -184,6 +191,7 @@ impl<'r> Responder<'r> for Error {
fn respond_to(self, _: &Request) -> response::Result<'r> {
match self.error {
ErrorKind::EmptyError(_) => {} // Don't print the error in this situation
ErrorKind::SimpleError(_) => {} // Don't print the error in this situation
_ => error!(target: "error", "{:#?}", self),
};
@ -203,9 +211,11 @@ impl<'r> Responder<'r> for Error {
#[macro_export]
macro_rules! err {
($msg:expr) => {{
error!("{}", $msg);
return Err(crate::error::Error::new($msg, $msg));
}};
($usr_msg:expr, $log_value:expr) => {{
error!("{}. {}", $usr_msg, $log_value);
return Err(crate::error::Error::new($usr_msg, $log_value));
}};
}


@ -1,15 +1,14 @@
use std::{env, str::FromStr};
use std::{str::FromStr};
use chrono::{DateTime, Local};
use chrono_tz::Tz;
use native_tls::{Protocol, TlsConnector};
use percent_encoding::{percent_encode, NON_ALPHANUMERIC};
use lettre::{
message::{header, Mailbox, Message, MultiPart, SinglePart},
transport::smtp::authentication::{Credentials, Mechanism as SmtpAuthMechanism},
transport::smtp::client::{Tls, TlsParameters},
transport::smtp::extension::ClientId,
Address, SmtpTransport, Tls, TlsParameters, Transport,
Address, SmtpTransport, Transport,
};
use crate::{
@ -20,53 +19,67 @@ use crate::{
};
fn mailer() -> SmtpTransport {
use std::time::Duration;
let host = CONFIG.smtp_host().unwrap();
let client_security = if CONFIG.smtp_ssl() {
let tls = TlsConnector::builder()
.min_protocol_version(Some(Protocol::Tlsv11))
.build()
.unwrap();
let smtp_client = SmtpTransport::builder_dangerous(host.as_str())
.port(CONFIG.smtp_port())
.timeout(Some(Duration::from_secs(CONFIG.smtp_timeout())));
let params = TlsParameters::new(host.clone(), tls);
// Determine security
let smtp_client = if CONFIG.smtp_ssl() {
let mut tls_parameters = TlsParameters::builder(host);
if CONFIG.smtp_accept_invalid_hostnames() {
tls_parameters.dangerous_accept_invalid_hostnames(true);
}
if CONFIG.smtp_accept_invalid_certs() {
tls_parameters.dangerous_accept_invalid_certs(true);
}
let tls_parameters = tls_parameters.build().unwrap();
if CONFIG.smtp_explicit_tls() {
Tls::Wrapper(params)
smtp_client.tls(Tls::Wrapper(tls_parameters))
} else {
Tls::Required(params)
smtp_client.tls(Tls::Required(tls_parameters))
}
} else {
Tls::None
smtp_client
};
use std::time::Duration;
let smtp_client = SmtpTransport::builder(host).port(CONFIG.smtp_port()).tls(client_security);
let smtp_client = match (CONFIG.smtp_username(), CONFIG.smtp_password()) {
(Some(user), Some(pass)) => smtp_client.credentials(Credentials::new(user, pass)),
_ => smtp_client,
};
let smtp_client = match CONFIG.helo_name() {
Some(helo_name) => smtp_client.hello_name(ClientId::new(helo_name)),
Some(helo_name) => smtp_client.hello_name(ClientId::Domain(helo_name)),
None => smtp_client,
};
let smtp_client = match CONFIG.smtp_auth_mechanism() {
Some(mechanism) => {
let correct_mechanism = format!("\"{}\"", crate::util::upcase_first(mechanism.trim_matches('"')));
let allowed_mechanisms = vec![SmtpAuthMechanism::Plain, SmtpAuthMechanism::Login, SmtpAuthMechanism::Xoauth2];
let mut selected_mechanisms = vec![];
for wanted_mechanism in mechanism.split(',') {
for m in &allowed_mechanisms {
if m.to_string().to_lowercase() == wanted_mechanism.trim_matches(|c| c == '"' || c == '\'' || c == ' ').to_lowercase() {
selected_mechanisms.push(*m);
}
}
};
// TODO: Allow more than one mechanism
match serde_json::from_str::<SmtpAuthMechanism>(&correct_mechanism) {
Ok(auth_mechanism) => smtp_client.authentication(vec![auth_mechanism]),
_ => panic!("Failure to parse mechanism. Is it proper Json? Eg. `\"Plain\"` not `Plain`"),
if !selected_mechanisms.is_empty() {
smtp_client.authentication(selected_mechanisms)
} else {
// Only show a warning, and return without setting an actual authentication mechanism
warn!("No valid SMTP Auth mechanism found for '{}', using default values", mechanism);
smtp_client
}
}
_ => smtp_client,
};
smtp_client.timeout(Some(Duration::from_secs(CONFIG.smtp_timeout()))).build()
smtp_client.build()
}
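For reference, the mechanism selection above is essentially a small comma-separated parser; a standalone sketch of the same trimming and splitting, assuming the raw value comes from the SMTP_AUTH_MECHANISM setting (e.g. "Plain, Login"):
// Sketch: mirrors the quote/space trimming and splitting done in mailer(), detached from the lettre types.
fn parse_auth_mechanisms(raw: &str) -> Vec<String> {
    raw.split(',')
        .map(|m| m.trim_matches(|c| c == '"' || c == '\'' || c == ' ').to_string())
        .filter(|m| !m.is_empty())
        .collect()
}
// parse_auth_mechanisms("\"Plain\", Login") == vec!["Plain", "Login"]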
fn get_text(template_name: &'static str, data: serde_json::Value) -> Result<(String, String, String), Error> {
@ -84,30 +97,15 @@ fn get_template(template_name: &str, data: &serde_json::Value) -> Result<(String
None => err!("Template doesn't contain subject"),
};
use newline_converter::unix2dos;
let body = match text_split.next() {
Some(s) => s.trim().to_string(),
Some(s) => unix2dos(s.trim()).to_string(),
None => err!("Template doesn't contain body"),
};
Ok((subject, body))
}
pub fn format_datetime(dt: &DateTime<Local>) -> String {
let fmt = "%A, %B %_d, %Y at %r %Z";
// With a DateTime<Local>, `%Z` formats as the time zone's UTC offset
// (e.g., `+00:00`). If the `TZ` environment variable is set, try to
// format as a time zone abbreviation instead (e.g., `UTC`).
if let Ok(tz) = env::var("TZ") {
if let Ok(tz) = tz.parse::<Tz>() {
return dt.with_timezone(&tz).format(fmt).to_string();
}
}
// Otherwise, fall back to just displaying the UTC offset.
dt.format(fmt).to_string()
}
pub fn send_password_hint(address: &str, hint: Option<String>) -> EmptyResult {
let template_name = if hint.is_some() {
"email/pw_hint_some"
@ -242,13 +240,14 @@ pub fn send_new_device_logged_in(address: &str, ip: &str, dt: &DateTime<Local>,
use crate::util::upcase_first;
let device = upcase_first(device);
let fmt = "%A, %B %_d, %Y at %r %Z";
let (subject, body_html, body_text) = get_text(
"email/new_device_logged_in",
json!({
"url": CONFIG.domain(),
"ip": ip,
"device": device,
"datetime": format_datetime(dt),
"datetime": crate::util::format_datetime_local(dt, fmt),
}),
)?;
@ -303,35 +302,49 @@ fn send_email(address: &str, subject: &str, body_html: &str, body_text: &str) ->
let address = format!("{}@{}", address_split[1], domain_puny);
let data = MultiPart::mixed()
.multipart(
MultiPart::alternative()
.singlepart(
SinglePart::quoted_printable()
.header(header::ContentType("text/plain; charset=utf-8".parse()?))
.body(body_text),
)
.multipart(
MultiPart::related().singlepart(
SinglePart::quoted_printable()
.header(header::ContentType("text/html; charset=utf-8".parse()?))
.body(body_html),
)
// .singlepart(SinglePart::base64() -- Inline files would go here
),
)
// .singlepart(SinglePart::base64() -- Attachments would go here
;
let html = SinglePart::base64()
.header(header::ContentType("text/html; charset=utf-8".parse()?))
.body(body_html);
let text = SinglePart::base64()
.header(header::ContentType("text/plain; charset=utf-8".parse()?))
.body(body_text);
// The boundary generated by Lettre itself is usually too long according to RFC 822, so we generate one ourselves.
use uuid::Uuid;
let unique_id = Uuid::new_v4().to_simple();
let boundary = format!("_Part_{}_", unique_id);
let alternative = MultiPart::alternative().boundary(boundary).singlepart(text).singlepart(html);
let smtp_from = &CONFIG.smtp_from();
let email = Message::builder()
.message_id(Some(format!("<{}.{}>", unique_id, smtp_from)))
.to(Mailbox::new(None, Address::from_str(&address)?))
.from(Mailbox::new(
Some(CONFIG.smtp_from_name()),
Address::from_str(&CONFIG.smtp_from())?,
Address::from_str(smtp_from)?,
))
.subject(subject)
.multipart(data)?;
.multipart(alternative)?;
let _ = mailer().send(&email)?;
Ok(())
match mailer().send(&email) {
Ok(_) => Ok(()),
// Match some common errors and make them more user friendly
Err(e) => match e {
lettre::transport::smtp::Error::Client(x) => {
err!(format!("SMTP Client error: {}", x));
},
lettre::transport::smtp::Error::Transient(x) => {
err!(format!("SMTP 4xx error: {:?}", x.message));
},
lettre::transport::smtp::Error::Permanent(x) => {
err!(format!("SMTP 5xx error: {:?}", x.message));
},
lettre::transport::smtp::Error::Io(x) => {
err!(format!("SMTP IO error: {}", x));
},
// Fallback for all other errors
_ => Err(e.into())
}
}
}


@ -17,7 +17,6 @@ extern crate diesel;
extern crate diesel_migrations;
use std::{
fmt, // For panic logging
fs::create_dir_all,
panic,
path::Path,
@ -26,12 +25,15 @@ use std::{
thread,
};
use structopt::StructOpt;
#[macro_use]
mod error;
mod api;
mod auth;
mod config;
mod crypto;
#[macro_use]
mod db;
mod mail;
mod util;
@ -39,18 +41,6 @@ mod util;
pub use config::CONFIG;
pub use error::{Error, MapResult};
use structopt::StructOpt;
// Used for catching panics and log them to file instead of stderr
use backtrace::Backtrace;
struct Shim(Backtrace);
impl fmt::Debug for Shim {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
write!(fmt, "\n{:?}", self.0)
}
}
#[derive(Debug, StructOpt)]
#[structopt(name = "bitwarden_rs", about = "A Bitwarden API server written in Rust")]
struct Opt {
@ -72,10 +62,8 @@ fn main() {
_ => false,
};
check_db();
check_rsa_keys();
check_web_vault();
migrations::run_migrations();
create_icon_cache_folder();
@ -125,8 +113,21 @@ fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
.level_for("launch_", log::LevelFilter::Off)
.level_for("rocket::rocket", log::LevelFilter::Off)
.level_for("rocket::fairing", log::LevelFilter::Off)
// Never show html5ever and hyper::proto logs, too noisy
.level_for("html5ever", log::LevelFilter::Off)
.level_for("hyper::proto", log::LevelFilter::Off)
.chain(std::io::stdout());
// Enable SMTP debug logging only when it is specifically needed for SMTP troubleshooting.
// This can contain sensitive information we do not want in the default debug/trace logging.
if CONFIG.smtp_debug() {
println!("[WARNING] SMTP Debugging is enabled (SMTP_DEBUG=true). Sensitive information could be disclosed via logs!");
println!("[WARNING] Only enable SMTP_DEBUG during troubleshooting!\n");
logger = logger.level_for("lettre::transport::smtp", log::LevelFilter::Debug)
} else {
logger = logger.level_for("lettre::transport::smtp", log::LevelFilter::Off)
}
if CONFIG.extended_logging() {
logger = logger.format(|out, message, record| {
out.finish(format_args!(
@ -156,8 +157,6 @@ fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
// Catch panics and log them instead of default output to StdErr
panic::set_hook(Box::new(|info| {
let backtrace = Backtrace::new();
let thread = thread::current();
let thread = thread.name().unwrap_or("unnamed");
@ -169,23 +168,25 @@ fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
},
};
let backtrace = backtrace::Backtrace::new();
match info.location() {
Some(location) => {
error!(
target: "panic", "thread '{}' panicked at '{}': {}:{}{:?}",
target: "panic", "thread '{}' panicked at '{}': {}:{}\n{:?}",
thread,
msg,
location.file(),
location.line(),
Shim(backtrace)
backtrace
);
}
None => error!(
target: "panic",
"thread '{}' panicked at '{}'{:?}",
"thread '{}' panicked at '{}'\n{:?}",
thread,
msg,
Shim(backtrace)
backtrace
),
}
}));
@ -211,30 +212,6 @@ fn chain_syslog(logger: fern::Dispatch) -> fern::Dispatch {
}
}
fn check_db() {
if cfg!(feature = "sqlite") {
let url = CONFIG.database_url();
let path = Path::new(&url);
if let Some(parent) = path.parent() {
if create_dir_all(parent).is_err() {
error!("Error creating database directory");
exit(1);
}
}
// Turn on WAL in SQLite
if CONFIG.enable_db_wal() {
use diesel::RunQueryDsl;
let connection = db::get_connection().expect("Can't connect to DB");
diesel::sql_query("PRAGMA journal_mode=wal")
.execute(&connection)
.expect("Failed to turn on WAL");
}
}
db::get_connection().expect("Can't connect to DB");
}
fn create_icon_cache_folder() {
// Try to create the icon cache folder, and generate an error if it could not.
create_dir_all(&CONFIG.icon_cache_folder()).expect("Error creating icon cache directory");
@ -296,46 +273,22 @@ fn check_web_vault() {
let index_path = Path::new(&CONFIG.web_vault_folder()).join("index.html");
if !index_path.exists() {
error!("Web vault is not found. To install it, please follow the steps in: ");
error!("Web vault is not found at '{}'. To install it, please follow the steps in: ", CONFIG.web_vault_folder());
error!("https://github.com/dani-garcia/bitwarden_rs/wiki/Building-binary#install-the-web-vault");
error!("You can also set the environment variable 'WEB_VAULT_ENABLED=false' to disable it");
exit(1);
}
}
// Embed the migrations from the migrations folder into the application
// This way, the program automatically migrates the database to the latest version
// https://docs.rs/diesel_migrations/*/diesel_migrations/macro.embed_migrations.html
#[allow(unused_imports)]
mod migrations {
#[cfg(feature = "sqlite")]
embed_migrations!("migrations/sqlite");
#[cfg(feature = "mysql")]
embed_migrations!("migrations/mysql");
#[cfg(feature = "postgresql")]
embed_migrations!("migrations/postgresql");
pub fn run_migrations() {
// Make sure the database is up to date (create if it doesn't exist, or run the migrations)
let connection = crate::db::get_connection().expect("Can't connect to DB");
use std::io::stdout;
// Disable Foreign Key Checks during migration
use diesel::RunQueryDsl;
#[cfg(feature = "postgres")]
diesel::sql_query("SET CONSTRAINTS ALL DEFERRED").execute(&connection).expect("Failed to disable Foreign Key Checks during migrations");
#[cfg(feature = "mysql")]
diesel::sql_query("SET FOREIGN_KEY_CHECKS = 0").execute(&connection).expect("Failed to disable Foreign Key Checks during migrations");
#[cfg(feature = "sqlite")]
diesel::sql_query("PRAGMA defer_foreign_keys = ON").execute(&connection).expect("Failed to disable Foreign Key Checks during migrations");
embedded_migrations::run_with_output(&connection, &mut stdout()).expect("Can't run migrations");
}
}
fn launch_rocket(extra_debug: bool) {
let pool = match util::retry_db(db::DbPool::from_config, CONFIG.db_connection_retries()) {
Ok(p) => p,
Err(e) => {
error!("Error creating database pool: {:?}", e);
exit(1);
}
};
let basepath = &CONFIG.domain_path();
// If adding more paths here, consider also adding them to
@ -347,7 +300,7 @@ fn launch_rocket(extra_debug: bool) {
.mount(&[basepath, "/identity"].concat(), api::identity_routes())
.mount(&[basepath, "/icons"].concat(), api::icons_routes())
.mount(&[basepath, "/notifications"].concat(), api::notifications_routes())
.manage(db::init_pool())
.manage(pool)
.manage(api::start_notification_server())
.attach(util::AppHeaders())
.attach(util::CORS())

Binary file not shown.


@ -39,8 +39,7 @@
"Type": 1,
"Domains": [
"apple.com",
"icloud.com",
"tv.apple.com"
"icloud.com"
],
"Excluded": false
},
@ -106,6 +105,7 @@
"passport.net",
"windows.com",
"microsoftonline.com",
"office.com",
"office365.com",
"microsoftstore.com",
"xbox.com",
@ -185,15 +185,21 @@
"Type": 18,
"Domains": [
"amazon.com",
"amazon.co.uk",
"amazon.ae",
"amazon.ca",
"amazon.de",
"amazon.fr",
"amazon.es",
"amazon.it",
"amazon.co.uk",
"amazon.com.au",
"amazon.co.nz",
"amazon.in"
"amazon.com.br",
"amazon.com.mx",
"amazon.com.tr",
"amazon.de",
"amazon.es",
"amazon.fr",
"amazon.in",
"amazon.it",
"amazon.nl",
"amazon.sa",
"amazon.sg"
],
"Excluded": false
},
@ -386,8 +392,7 @@
"alibaba.com",
"aliexpress.com",
"aliyun.com",
"net.cn",
"www.net.cn"
"net.cn"
],
"Excluded": false
},
@ -583,11 +588,28 @@
"Type": 64,
"Domains": [
"ebay.com",
"ebay.de",
"ebay.at",
"ebay.be",
"ebay.ca",
"ebay.in",
"ebay.ch",
"ebay.cn",
"ebay.co.jp",
"ebay.co.th",
"ebay.co.uk",
"ebay.com.au"
"ebay.com.au",
"ebay.com.hk",
"ebay.com.my",
"ebay.com.sg",
"ebay.com.tw",
"ebay.de",
"ebay.es",
"ebay.fr",
"ebay.ie",
"ebay.in",
"ebay.it",
"ebay.nl",
"ebay.ph",
"ebay.pl"
],
"Excluded": false
},
@ -717,41 +739,27 @@
"eventbrite.ca",
"eventbrite.ch",
"eventbrite.cl",
"eventbrite.co.id",
"eventbrite.co.in",
"eventbrite.co.kr",
"eventbrite.co",
"eventbrite.co.nz",
"eventbrite.co.uk",
"eventbrite.co.ve",
"eventbrite.com",
"eventbrite.com.ar",
"eventbrite.com.au",
"eventbrite.com.bo",
"eventbrite.com.br",
"eventbrite.com.co",
"eventbrite.com.hk",
"eventbrite.com.hn",
"eventbrite.com.mx",
"eventbrite.com.pe",
"eventbrite.com.sg",
"eventbrite.com.tr",
"eventbrite.com.tw",
"eventbrite.cz",
"eventbrite.de",
"eventbrite.dk",
"eventbrite.es",
"eventbrite.fi",
"eventbrite.fr",
"eventbrite.gy",
"eventbrite.hu",
"eventbrite.hk",
"eventbrite.ie",
"eventbrite.is",
"eventbrite.it",
"eventbrite.jp",
"eventbrite.mx",
"eventbrite.nl",
"eventbrite.no",
"eventbrite.pl",
"eventbrite.pt",
"eventbrite.ru",
"eventbrite.se"
"eventbrite.se",
"eventbrite.sg"
],
"Excluded": false
},
@ -769,15 +777,6 @@
},
{
"Type": 75,
"Domains": [
"netcup.de",
"netcup.eu",
"customercontrolpanel.de"
],
"Excluded": false
},
{
"Type": 76,
"Domains": [
"docusign.com",
"docusign.net"
@ -785,7 +784,7 @@
"Excluded": false
},
{
"Type": 77,
"Type": 76,
"Domains": [
"envato.com",
"themeforest.net",
@ -799,7 +798,7 @@
"Excluded": false
},
{
"Type": 78,
"Type": 77,
"Domains": [
"x10hosting.com",
"x10premium.com"
@ -807,7 +806,7 @@
"Excluded": false
},
{
"Type": 79,
"Type": 78,
"Domains": [
"dnsomatic.com",
"opendns.com",
@ -816,7 +815,7 @@
"Excluded": false
},
{
"Type": 80,
"Type": 79,
"Domains": [
"cagreatamerica.com",
"canadaswonderland.com",
@ -835,11 +834,56 @@
"Excluded": false
},
{
"Type": 81,
"Type": 80,
"Domains": [
"ubnt.com",
"ui.com"
],
"Excluded": false
},
{
"Type": 81,
"Domains": [
"discordapp.com",
"discord.com"
],
"Excluded": false
},
{
"Type": 82,
"Domains": [
"netcup.de",
"netcup.eu",
"customercontrolpanel.de"
],
"Excluded": false
},
{
"Type": 83,
"Domains": [
"yandex.com",
"ya.ru",
"yandex.az",
"yandex.by",
"yandex.co.il",
"yandex.com.am",
"yandex.com.ge",
"yandex.com.tr",
"yandex.ee",
"yandex.fi",
"yandex.fr",
"yandex.kg",
"yandex.kz",
"yandex.lt",
"yandex.lv",
"yandex.md",
"yandex.pl",
"yandex.ru",
"yandex.tj",
"yandex.tm",
"yandex.ua",
"yandex.uz"
],
"Excluded": false
}
]


@ -1,10 +1,10 @@
/*!
* Bootstrap v4.5.0 (https://getbootstrap.com/)
* Bootstrap v4.5.2 (https://getbootstrap.com/)
* Copyright 2011-2020 The Bootstrap Authors
* Copyright 2011-2020 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
*/
:root {
--blue: #007bff;
--indigo: #6610f2;
--purple: #6f42c1;
@ -163,12 +163,12 @@ a:hover {
text-decoration: underline;
}
a:not([href]) {
a:not([href]):not([class]) {
color: inherit;
text-decoration: none;
}
a:not([href]):hover {
a:not([href]):not([class]):hover {
color: inherit;
text-decoration: none;
}
@ -539,39 +539,12 @@ pre code {
overflow-y: scroll;
}
.container {
width: 100%;
padding-right: 15px;
padding-left: 15px;
margin-right: auto;
margin-left: auto;
}
@media (min-width: 576px) {
.container {
max-width: 540px;
}
}
@media (min-width: 768px) {
.container {
max-width: 720px;
}
}
@media (min-width: 992px) {
.container {
max-width: 960px;
}
}
@media (min-width: 1200px) {
.container {
max-width: 1140px;
}
}
.container-fluid, .container-sm, .container-md, .container-lg, .container-xl {
.container,
.container-fluid,
.container-sm,
.container-md,
.container-lg,
.container-xl {
width: 100%;
padding-right: 15px;
padding-left: 15px;
@ -640,7 +613,6 @@ pre code {
flex-basis: 0;
-ms-flex-positive: 1;
flex-grow: 1;
min-width: 0;
max-width: 100%;
}
@ -884,7 +856,6 @@ pre code {
flex-basis: 0;
-ms-flex-positive: 1;
flex-grow: 1;
min-width: 0;
max-width: 100%;
}
.row-cols-sm-1 > * {
@ -1087,7 +1058,6 @@ pre code {
flex-basis: 0;
-ms-flex-positive: 1;
flex-grow: 1;
min-width: 0;
max-width: 100%;
}
.row-cols-md-1 > * {
@ -1290,7 +1260,6 @@ pre code {
flex-basis: 0;
-ms-flex-positive: 1;
flex-grow: 1;
min-width: 0;
max-width: 100%;
}
.row-cols-lg-1 > * {
@ -1493,7 +1462,6 @@ pre code {
flex-basis: 0;
-ms-flex-positive: 1;
flex-grow: 1;
min-width: 0;
max-width: 100%;
}
.row-cols-xl-1 > * {
@ -2259,6 +2227,7 @@ textarea.form-control {
.valid-tooltip {
position: absolute;
top: 100%;
left: 0;
z-index: 5;
display: none;
max-width: 100%;
@ -2359,6 +2328,7 @@ textarea.form-control {
.invalid-tooltip {
position: absolute;
top: 100%;
left: 0;
z-index: 5;
display: none;
max-width: 100%;
@ -3776,6 +3746,7 @@ input[type="button"].btn-block {
.custom-control {
position: relative;
z-index: 1;
display: block;
min-height: 1.5rem;
padding-left: 1.5rem;
@ -4312,12 +4283,14 @@ input[type="button"].btn-block {
background-color: #007bff;
}
.nav-fill > .nav-link,
.nav-fill .nav-item {
-ms-flex: 1 1 auto;
flex: 1 1 auto;
text-align: center;
}
.nav-justified > .nav-link,
.nav-justified .nav-item {
-ms-flex-preferred-size: 0;
flex-basis: 0;
@ -4775,6 +4748,11 @@ input[type="button"].btn-block {
border-bottom-left-radius: calc(0.25rem - 1px);
}
.card > .card-header + .list-group,
.card > .list-group + .card-footer {
border-top: 0;
}
.card-body {
-ms-flex: 1 1 auto;
flex: 1 1 auto;
@ -4814,10 +4792,6 @@ input[type="button"].btn-block {
border-radius: calc(0.25rem - 1px) calc(0.25rem - 1px) 0 0;
}
.card-header + .list-group .list-group-item:first-child {
border-top: 0;
}
.card-footer {
padding: 0.75rem 1.25rem;
background-color: rgba(0, 0, 0, 0.03);
@ -4847,6 +4821,7 @@ input[type="button"].btn-block {
bottom: 0;
left: 0;
padding: 1.25rem;
border-radius: calc(0.25rem - 1px);
}
.card-img,
@ -4958,6 +4933,10 @@ input[type="button"].btn-block {
}
}
.accordion {
overflow-anchor: none;
}
.accordion > .card {
overflow: hidden;
}
@ -5876,15 +5855,14 @@ a.close.disabled {
}
.toast {
-ms-flex-preferred-size: 350px;
flex-basis: 350px;
max-width: 350px;
overflow: hidden;
font-size: 0.875rem;
background-color: rgba(255, 255, 255, 0.85);
background-clip: padding-box;
border: 1px solid rgba(0, 0, 0, 0.1);
box-shadow: 0 0.25rem 0.75rem rgba(0, 0, 0, 0.1);
-webkit-backdrop-filter: blur(10px);
backdrop-filter: blur(10px);
opacity: 0;
border-radius: 0.25rem;
}
@ -5916,6 +5894,8 @@ a.close.disabled {
background-color: rgba(255, 255, 255, 0.85);
background-clip: padding-box;
border-bottom: 1px solid rgba(0, 0, 0, 0.05);
border-top-left-radius: calc(0.25rem - 1px);
border-top-right-radius: calc(0.25rem - 1px);
}
.toast-body {
@ -10182,7 +10162,8 @@ a.text-dark:hover, a.text-dark:focus {
}
.text-break {
word-wrap: break-word !important;
word-break: break-word !important;
overflow-wrap: break-word !important;
}
.text-reset {
@ -10275,4 +10256,3 @@ a.text-dark:hover, a.text-dark:focus {
border-color: #dee2e6;
}
}
/*# sourceMappingURL=bootstrap.css.map */


@ -0,0 +1,223 @@
/*
* This combined file was created by the DataTables downloader builder:
* https://datatables.net/download
*
* To rebuild or modify this file with the latest versions of the included
* software please visit:
* https://datatables.net/download/#bs4/dt-1.10.22
*
* Included libraries:
* DataTables 1.10.22
*/
table.dataTable {
clear: both;
margin-top: 6px !important;
margin-bottom: 6px !important;
max-width: none !important;
border-collapse: separate !important;
border-spacing: 0;
}
table.dataTable td,
table.dataTable th {
-webkit-box-sizing: content-box;
box-sizing: content-box;
}
table.dataTable td.dataTables_empty,
table.dataTable th.dataTables_empty {
text-align: center;
}
table.dataTable.nowrap th,
table.dataTable.nowrap td {
white-space: nowrap;
}
div.dataTables_wrapper div.dataTables_length label {
font-weight: normal;
text-align: left;
white-space: nowrap;
}
div.dataTables_wrapper div.dataTables_length select {
width: auto;
display: inline-block;
}
div.dataTables_wrapper div.dataTables_filter {
text-align: right;
}
div.dataTables_wrapper div.dataTables_filter label {
font-weight: normal;
white-space: nowrap;
text-align: left;
}
div.dataTables_wrapper div.dataTables_filter input {
margin-left: 0.5em;
display: inline-block;
width: auto;
}
div.dataTables_wrapper div.dataTables_info {
padding-top: 0.85em;
}
div.dataTables_wrapper div.dataTables_paginate {
margin: 0;
white-space: nowrap;
text-align: right;
}
div.dataTables_wrapper div.dataTables_paginate ul.pagination {
margin: 2px 0;
white-space: nowrap;
justify-content: flex-end;
}
div.dataTables_wrapper div.dataTables_processing {
position: absolute;
top: 50%;
left: 50%;
width: 200px;
margin-left: -100px;
margin-top: -26px;
text-align: center;
padding: 1em 0;
}
table.dataTable > thead > tr > th:active,
table.dataTable > thead > tr > td:active {
outline: none;
}
table.dataTable > thead > tr > th:not(.sorting_disabled),
table.dataTable > thead > tr > td:not(.sorting_disabled) {
padding-right: 30px;
}
table.dataTable > thead .sorting,
table.dataTable > thead .sorting_asc,
table.dataTable > thead .sorting_desc,
table.dataTable > thead .sorting_asc_disabled,
table.dataTable > thead .sorting_desc_disabled {
cursor: pointer;
position: relative;
}
table.dataTable > thead .sorting:before, table.dataTable > thead .sorting:after,
table.dataTable > thead .sorting_asc:before,
table.dataTable > thead .sorting_asc:after,
table.dataTable > thead .sorting_desc:before,
table.dataTable > thead .sorting_desc:after,
table.dataTable > thead .sorting_asc_disabled:before,
table.dataTable > thead .sorting_asc_disabled:after,
table.dataTable > thead .sorting_desc_disabled:before,
table.dataTable > thead .sorting_desc_disabled:after {
position: absolute;
bottom: 0.9em;
display: block;
opacity: 0.3;
}
table.dataTable > thead .sorting:before,
table.dataTable > thead .sorting_asc:before,
table.dataTable > thead .sorting_desc:before,
table.dataTable > thead .sorting_asc_disabled:before,
table.dataTable > thead .sorting_desc_disabled:before {
right: 1em;
content: "\2191";
}
table.dataTable > thead .sorting:after,
table.dataTable > thead .sorting_asc:after,
table.dataTable > thead .sorting_desc:after,
table.dataTable > thead .sorting_asc_disabled:after,
table.dataTable > thead .sorting_desc_disabled:after {
right: 0.5em;
content: "\2193";
}
table.dataTable > thead .sorting_asc:before,
table.dataTable > thead .sorting_desc:after {
opacity: 1;
}
table.dataTable > thead .sorting_asc_disabled:before,
table.dataTable > thead .sorting_desc_disabled:after {
opacity: 0;
}
div.dataTables_scrollHead table.dataTable {
margin-bottom: 0 !important;
}
div.dataTables_scrollBody table {
border-top: none;
margin-top: 0 !important;
margin-bottom: 0 !important;
}
div.dataTables_scrollBody table thead .sorting:before,
div.dataTables_scrollBody table thead .sorting_asc:before,
div.dataTables_scrollBody table thead .sorting_desc:before,
div.dataTables_scrollBody table thead .sorting:after,
div.dataTables_scrollBody table thead .sorting_asc:after,
div.dataTables_scrollBody table thead .sorting_desc:after {
display: none;
}
div.dataTables_scrollBody table tbody tr:first-child th,
div.dataTables_scrollBody table tbody tr:first-child td {
border-top: none;
}
div.dataTables_scrollFoot > .dataTables_scrollFootInner {
box-sizing: content-box;
}
div.dataTables_scrollFoot > .dataTables_scrollFootInner > table {
margin-top: 0 !important;
border-top: none;
}
@media screen and (max-width: 767px) {
div.dataTables_wrapper div.dataTables_length,
div.dataTables_wrapper div.dataTables_filter,
div.dataTables_wrapper div.dataTables_info,
div.dataTables_wrapper div.dataTables_paginate {
text-align: center;
}
div.dataTables_wrapper div.dataTables_paginate ul.pagination {
justify-content: center !important;
}
}
table.dataTable.table-sm > thead > tr > th:not(.sorting_disabled) {
padding-right: 20px;
}
table.dataTable.table-sm .sorting:before,
table.dataTable.table-sm .sorting_asc:before,
table.dataTable.table-sm .sorting_desc:before {
top: 5px;
right: 0.85em;
}
table.dataTable.table-sm .sorting:after,
table.dataTable.table-sm .sorting_asc:after,
table.dataTable.table-sm .sorting_desc:after {
top: 5px;
}
table.table-bordered.dataTable {
border-right-width: 0;
}
table.table-bordered.dataTable th,
table.table-bordered.dataTable td {
border-left-width: 0;
}
table.table-bordered.dataTable th:last-child, table.table-bordered.dataTable th:last-child,
table.table-bordered.dataTable td:last-child,
table.table-bordered.dataTable td:last-child {
border-right-width: 1px;
}
table.table-bordered.dataTable tbody th,
table.table-bordered.dataTable tbody td {
border-bottom-width: 0;
}
div.dataTables_scrollHead table.table-bordered {
border-bottom-width: 0;
}
div.table-responsive > div.dataTables_wrapper > div.row {
margin: 0;
}
div.table-responsive > div.dataTables_wrapper > div.row > div[class^="col-"]:first-child {
padding-left: 0;
}
div.table-responsive > div.dataTables_wrapper > div.row > div[class^="col-"]:last-child {
padding-right: 0;
}

File diff suppressed because it is too large

File diff suppressed because it is too large

@ -122,6 +122,6 @@
})();
</script>
<!-- This script needs to be at the bottom, else it will fail! -->
<script src="{{urlpath}}/bwrs_static/bootstrap-native-v4.js"></script>
<script src="{{urlpath}}/bwrs_static/bootstrap-native.js"></script>
</body>
</html>


@ -72,7 +72,7 @@
const hour = String(d.getUTCHours()).padStart(2, '0');
const minute = String(d.getUTCMinutes()).padStart(2, '0');
const seconds = String(d.getUTCSeconds()).padStart(2, '0');
const browserUTC = year + '-' + month + '-' + day + ' ' + hour + ':' + minute + ':' + seconds;
const browserUTC = `${year}-${month}-${day} ${hour}:${minute}:${seconds} UTC`;
document.getElementById("time-browser-string").innerText = browserUTC;
const serverUTC = document.getElementById("time-server-string").innerText;
@ -147,4 +147,4 @@
}
}
})();
</script>


@ -1,12 +1,12 @@
<main class="container">
<div id="organizations-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-0">Organizations</h6>
<h6 class="border-bottom pb-2 mb-3">Organizations</h6>
<div class="table-responsive-xl small">
<table class="table table-sm table-striped table-hover">
<table id="orgs-table" class="table table-sm table-striped table-hover">
<thead>
<tr>
<th style="width: 24px;" colspan="2">Organization</th>
<th>Organization</th>
<th>Users</th>
<th>Items</th>
<th>Attachments</th>
@ -15,13 +15,15 @@
<tbody>
{{#each organizations}}
<tr>
<td><img class="rounded identicon" data-src="{{Id}}"></td>
<td>
<strong>{{Name}}</strong>
<span class="mr-2">({{BillingEmail}})</span>
<span class="d-block">
<span class="badge badge-success">{{Id}}</span>
</span>
<img class="mr-2 float-left rounded identicon" data-src="{{Id}}">
<div class="float-left">
<strong>{{Name}}</strong>
<span class="mr-2">({{BillingEmail}})</span>
<span class="d-block">
<span class="badge badge-success">{{Id}}</span>
</span>
</div>
</td>
<td>
<span class="d-block">{{user_count}}</span>
@ -40,12 +42,23 @@
</tbody>
</table>
</div>
</div>
</main>
<link rel="stylesheet" href="{{urlpath}}/bwrs_static/datatables.css" />
<script src="{{urlpath}}/bwrs_static/jquery-3.5.1.slim.js"></script>
<script src="{{urlpath}}/bwrs_static/datatables.js"></script>
<script>
document.querySelectorAll("img.identicon").forEach(function (e, i) {
e.src = identicon(e.dataset.src);
});
document.addEventListener("DOMContentLoaded", function(event) {
$('#orgs-table').DataTable({
"responsive": true,
"lengthMenu": [ [-1, 5, 10, 25, 50], ["All", 5, 10, 25, 50] ],
"pageLength": -1, // Default show all
});
});
</script>


@ -17,7 +17,7 @@
<div id="g_{{group}}" class="card-body collapse" data-parent="#config-form">
{{#each elements}}
{{#if editable}}
<div class="form-group row" title="[{{name}}] {{doc.description}}">
<div class="form-group row align-items-center" title="[{{name}}] {{doc.description}}">
{{#case type "text" "number" "password"}}
<label for="input_{{name}}" class="col-sm-3 col-form-label">{{doc.name}}</label>
<div class="col-sm-8 input-group">
@ -34,7 +34,7 @@
</div>
{{/case}}
{{#case type "checkbox"}}
<div class="col-sm-3">{{doc.name}}</div>
<div class="col-sm-3 col-form-label">{{doc.name}}</div>
<div class="col-sm-8">
<div class="form-check">
<input class="form-check-input conf-{{type}}" type="checkbox" id="input_{{name}}"
@ -48,7 +48,7 @@
{{/if}}
{{/each}}
{{#case group "smtp"}}
<div class="form-group row pt-3 border-top" title="Send a test email to given email address">
<div class="form-group row align-items-center pt-3 border-top" title="Send a test email to given email address">
<label for="smtp-test-email" class="col-sm-3 col-form-label">Test SMTP</label>
<div class="col-sm-8 input-group">
<input class="form-control" id="smtp-test-email" type="email" placeholder="Enter test email">
@ -76,7 +76,7 @@
{{#each config}}
{{#each elements}}
{{#unless editable}}
<div class="form-group row" title="[{{name}}] {{doc.description}}">
<div class="form-group row align-items-center" title="[{{name}}] {{doc.description}}">
{{#case type "text" "number" "password"}}
<label for="input_{{name}}" class="col-sm-3 col-form-label">{{doc.name}}</label>
<div class="col-sm-8 input-group">
@ -92,9 +92,9 @@
</div>
{{/case}}
{{#case type "checkbox"}}
<div class="col-sm-3">{{doc.name}}</div>
<div class="col-sm-3 col-form-label">{{doc.name}}</div>
<div class="col-sm-8">
<div class="form-check">
<div class="form-check align-middle">
<input disabled class="form-check-input" type="checkbox" id="input_{{name}}"
{{#if value}} checked {{/if}}>
@ -139,6 +139,10 @@
<script>
function smtpTest() {
if (formHasChanges(config_form)) {
alert("Config has been changed but not yet saved.\nPlease save the changes first before sending a test email.");
return false;
}
test_email = document.getElementById("smtp-test-email");
data = JSON.stringify({ "email": test_email.value });
_post("{{urlpath}}/admin/test/smtp/",
@ -205,4 +209,35 @@
// {{#each config}} {{#if grouptoggle}}
masterCheck("input_{{grouptoggle}}", "#g_{{group}} input");
// {{/if}} {{/each}}
// Two helper functions to check whether any of the form fields were changed
// Useful, for example, during the SMTP test to prevent people from sending a test email before saving their new settings
function initChangeDetection(form) {
const ignore_fields = ["smtp-test-email"];
Array.from(form).forEach((el) => {
if (! ignore_fields.includes(el.id)) {
el.dataset.origValue = el.value
}
});
}
function formHasChanges(form) {
return Array.from(form).some(el => 'origValue' in el.dataset && ( el.dataset.origValue !== el.value));
}
// Trigger Form Change Detection
const config_form = document.getElementById('config-form');
initChangeDetection(config_form);
// Colorize some settings which are high risk
const risk_items = document.getElementsByClassName('col-form-label');
function colorRiskSettings(risk_el) {
Array.from(risk_el).forEach((el) => {
if (el.innerText.toLowerCase().includes('risks') ) {
el.parentElement.className += ' alert-danger'
console.log(el)
}
});
}
colorRiskSettings(risk_items);
</script>


@ -1,37 +1,43 @@
<main class="container">
<div id="users-block" class="my-3 p-3 bg-white rounded shadow">
<h6 class="border-bottom pb-2 mb-0">Registered Users</h6>
<h6 class="border-bottom pb-2 mb-3">Registered Users</h6>
<div class="table-responsive-xl small">
<table class="table table-sm table-striped table-hover">
<table id="users-table" class="table table-sm table-striped table-hover">
<thead>
<tr>
<th style="width: 24px;">User</th>
<th></th>
<th>User</th>
<th style="width:60px; min-width: 60px;">Items</th>
<th>Attachments</th>
<th style="min-width: 140px;">Organizations</th>
<th style="min-width: 120px;">Organizations</th>
<th style="width: 140px; min-width: 140px;">Actions</th>
</tr>
</thead>
<tbody>
{{#each users}}
<tr>
<td><img class="mr-2 rounded identicon" data-src="{{Email}}"></td>
<td>
<strong>{{Name}}</strong>
<span class="d-block">{{Email}}</span>
<span class="d-block">
{{#if TwoFactorEnabled}}
<span class="badge badge-success mr-2" title="2FA is enabled">2FA</span>
{{/if}}
{{#case _Status 1}}
<span class="badge badge-warning mr-2" title="User is invited">Invited</span>
{{/case}}
{{#if EmailVerified}}
<span class="badge badge-success mr-2" title="Email has been verified">Verified</span>
{{/if}}
</span>
<img class="float-left mr-2 rounded identicon" data-src="{{Email}}">
<div class="float-left">
<strong>{{Name}}</strong>
<span class="d-block">{{Email}}</span>
<span class="d-block">Created at: {{created_at}}</span>
<span class="d-block">Last active: {{last_active}}</span>
<span class="d-block">
{{#unless user_enabled}}
<span class="badge badge-danger mr-2" title="User is disabled">Disabled</span>
{{/unless}}
{{#if TwoFactorEnabled}}
<span class="badge badge-success mr-2" title="2FA is enabled">2FA</span>
{{/if}}
{{#case _Status 1}}
<span class="badge badge-warning mr-2" title="User is invited">Invited</span>
{{/case}}
{{#if EmailVerified}}
<span class="badge badge-success mr-2" title="Email has been verified">Verified</span>
{{/if}}
</span>
</div>
</td>
<td>
<span class="d-block">{{cipher_count}}</span>
@ -53,6 +59,11 @@
{{/if}}
<a class="d-block" href="#" onclick='deauthUser({{jsesc Id}})'>Deauthorize sessions</a>
<a class="d-block" href="#" onclick='deleteUser({{jsesc Id}}, {{jsesc Email}})'>Delete User</a>
{{#if user_enabled}}
<a class="d-block" href="#" onclick='disableUser({{jsesc Id}}, {{jsesc Email}})'>Disable User</a>
{{else}}
<a class="d-block" href="#" onclick='enableUser({{jsesc Id}}, {{jsesc Email}})'>Enable User</a>
{{/if}}
</td>
</tr>
{{/each}}
@ -83,6 +94,9 @@
</div>
</main>
<link rel="stylesheet" href="{{urlpath}}/bwrs_static/datatables.css" />
<script src="{{urlpath}}/bwrs_static/jquery-3.5.1.slim.js"></script>
<script src="{{urlpath}}/bwrs_static/datatables.js"></script>
<script>
function deleteUser(id, mail) {
var input_mail = prompt("To delete user '" + mail + "', please type the email below")
@ -109,6 +123,24 @@
"Error deauthorizing sessions");
return false;
}
function disableUser(id, mail) {
var confirmed = confirm("Are you sure you want to disable user '" + mail + "'? This will also deauthorize their sessions.")
if (confirmed) {
_post("{{urlpath}}/admin/users/" + id + "/disable",
"User disabled successfully",
"Error disabling user");
}
return false;
}
function enableUser(id, mail) {
var confirmed = confirm("Are you sure you want to enable user '" + mail + "'?")
if (confirmed) {
_post("{{urlpath}}/admin/users/" + id + "/enable",
"User enabled successfully",
"Error enabling user");
}
return false;
}
function updateRevisions() {
_post("{{urlpath}}/admin/users/update_revision",
"Success, clients will sync next time they connect",
@ -140,4 +172,19 @@
e.style.backgroundColor = orgtype.color;
e.title = orgtype.name;
});
document.addEventListener("DOMContentLoaded", function(event) {
$('#users-table').DataTable({
"responsive": true,
"lengthMenu": [ [-1, 5, 10, 25, 50], ["All", 5, 10, 25, 50] ],
"pageLength": -1, // Default show all
"columns": [
null, // Userdata
null, // Items
null, // Attachments
null, // Organizations
{ "searchable": false, "orderable": false }, // Actions
],
});
});
</script>


@ -1,6 +1,8 @@
Your Email Change
<!---------------->
<html>
<p>To finalize changing your email address enter the following code in web vault: <b>{{token}}</b></p>
<p>If you did not try to change an email address, you can safely ignore this email.</p>
</html>
To finalize changing your email address enter the following code in web vault: {{token}}
If you did not try to change an email address, you can safely ignore this email.
===
Github: https://github.com/dani-garcia/bitwarden_rs


@ -1,12 +1,10 @@
Delete Your Account
<!---------------->
<html>
<p>
click the link below to delete your account.
<br>
<br>
<a href="{{url}}/#/verify-recover-delete?userId={{user_id}}&token={{token}}&email={{email}}">
Delete Your Account</a>
</p>
<p>If you did not request this email to delete your account, you can safely ignore this email.</p>
</html>
Click the link below to delete your account.
Delete Your Account: {{url}}/#/verify-recover-delete?userId={{user_id}}&token={{token}}&email={{email}}
If you did not request this email to delete your account, you can safely ignore this email.
===
Github: https://github.com/dani-garcia/bitwarden_rs


@ -1,8 +1,7 @@
Invitation to {{{org_name}}} accepted
<!---------------->
<html>
<p>
Your invitation for <b>{{email}}</b> to join <b>{{org_name}}</b> was accepted.
Please <a href="{{url}}/">log in</a> to the bitwarden_rs server and confirm them from the organization management page.
</p>
</html>
Your invitation for *{{email}}* to join *{{org_name}}* was accepted.
Please log in via {{url}} to the bitwarden_rs server and confirm them from the organization management page.
===
Github: https://github.com/dani-garcia/bitwarden_rs


@ -1,8 +1,7 @@
Invitation to {{{org_name}}} confirmed
<!---------------->
<html>
<p>
Your invitation to join <b>{{org_name}}</b> was confirmed.
It will now appear under the Organizations the next time you <a href="{{url}}/">log in</a> to the web vault.
</p>
</html>
Your invitation to join *{{org_name}}* was confirmed.
It will now appear under the Organizations the next time you log in to the web vault at {{url}}.
===
Github: https://github.com/dani-garcia/bitwarden_rs


@ -1,14 +1,12 @@
New Device Logged In From {{{device}}}
<!---------------->
<html>
<p>
Your account was just logged into from a new device.
Date: {{datetime}}
IP Address: {{ip}}
Device Type: {{device}}
* Date: {{datetime}}
* IP Address: {{ip}}
* Device Type: {{device}}
You can deauthorize all devices that have access to your account from the
<a href="{{url}}/">web vault</a> under Settings > My Account > Deauthorize Sessions.
</p>
</html>
You can deauthorize all devices that have access to your account from the web vault ( {{url}} ) under Settings > My Account > Deauthorize Sessions.
===
Github: https://github.com/dani-garcia/bitwarden_rs


@ -2,6 +2,9 @@ Your master password hint
<!---------------->
You (or someone) recently requested your master password hint. Unfortunately, your account does not have a master password hint.
If you cannot remember your master password, there is no way to recover your data. The only option to gain access to your account again is to <a href="{{url}}/#/recover-delete">delete the account</a> so that you can register again and start over. All data associated with your account will be deleted.
If you cannot remember your master password, there is no way to recover your data. The only option to gain access to your account again is to delete the account ( {{url}}/#/recover-delete ) so that you can register again and start over. All data associated with your account will be deleted.
If you did not request your master password hint you can safely ignore this email.
===
Github: https://github.com/dani-garcia/bitwarden_rs


@ -2,9 +2,12 @@ Your master password hint
<!---------------->
You (or someone) recently requested your master password hint.
Your hint is: "{{hint}}"
Log in: <a href="{{url}}/">Web Vault</a>
Your hint is: *{{hint}}*
Log in to the web vault: {{url}}
If you cannot remember your master password, there is no way to recover your data. The only option to gain access to your account again is to <a href="{{url}}/#/recover-delete">delete the account</a> so that you can register again and start over. All data associated with your account will be deleted.
If you cannot remember your master password, there is no way to recover your data. The only option to gain access to your account again is to delete the account ( {{url}}/#/recover-delete ) so that you can register again and start over. All data associated with your account will be deleted.
If you did not request your master password hint you can safely ignore this email.
===
Github: https://github.com/dani-garcia/bitwarden_rs


@ -1,12 +1,12 @@
Join {{{org_name}}}
<!---------------->
<html>
<p>
You have been invited to join the <b>{{org_name}}</b> organization.
<br>
<br>
<a href="{{url}}/#/accept-organization/?organizationId={{org_id}}&organizationUserId={{org_user_id}}&email={{email}}&organizationName={{org_name}}&token={{token}}">
Click here to join</a>
</p>
<p>If you do not wish to join this organization, you can safely ignore this email.</p>
</html>
You have been invited to join the *{{org_name}}* organization.
Click here to join: {{url}}/#/accept-organization/?organizationId={{org_id}}&organizationUserId={{org_user_id}}&email={{email}}&organizationName={{org_name}}&token={{token}}
If you do not wish to join this organization, you can safely ignore this email.
===
Github: https://github.com/dani-garcia/bitwarden_rs


@ -1,8 +1,8 @@
Bitwarden_rs SMTP Test
<!---------------->
<html>
<p>
This is a test email to verify the SMTP configuration for <a href="{{url}}">{{url}}</a>.
</p>
<p>When you can read this email it is probably configured correctly.</p>
</html>
This is a test email to verify the SMTP configuration for {{url}}.
When you can read this email it is probably configured correctly.
===
Github: https://github.com/dani-garcia/bitwarden_rs


@ -1,9 +1,8 @@
Your Two-step Login Verification Code
<!---------------->
<html>
<p>
Your two-step verification code is: <b>{{token}}</b>
Your two-step verification code is: {{token}}
Use this code to complete logging in with Bitwarden.
</p>
</html>
Use this code to complete logging in with Bitwarden.
===
Github: https://github.com/dani-garcia/bitwarden_rs


@ -1,12 +1,10 @@
Verify Your Email
<!---------------->
<html>
<p>
Verify this email address for your account by clicking the link below.
<br>
<br>
<a href="{{url}}/#/verify-email/?userId={{user_id}}&token={{token}}">
Verify Email Address Now</a>
</p>
<p>If you did not request to verify your account, you can safely ignore this email.</p>
</html>
Verify Email Address Now: {{url}}/#/verify-email/?userId={{user_id}}&token={{token}}
If you did not request to verify your account, you can safely ignore this email.
===
Github: https://github.com/dani-garcia/bitwarden_rs

Some files were not shown because too many files have changed in this diff