Compare commits


139 Commits
main ... dev

Author SHA1 Message Date
d6f05e73a7 Update README.md 2025-05-14 08:50:43 +00:00
chu23465
6f35e17fdd Small fixes 2025-05-12 14:47:54 +05:30
chu23465
3910a39571 MA + NF changes 2025-05-12 14:41:21 +05:30
chu23465
7766581007 Update README.md 2025-05-02 08:34:29 +05:30
chu23465
4a7b1b7377 Small change 2025-04-30 20:59:30 +05:30
chu23465
7f3b4e396c Update amazon.py 2025-04-30 20:27:22 +05:30
chu23465
d3e438b71d Update README 2025-04-30 20:21:07 +05:30
chu23465
f74fbdb82d Update netflix.yml 2025-04-30 20:01:20 +05:30
chu23465
7745c02e00 Added iTunes service 2025-04-30 19:50:29 +05:30
chu23465
91e9e2f579 Few changes
Added --match-forced flag
2025-04-30 19:32:28 +05:30
chu23465
f7bdd003c4 Update README.md 2025-04-30 19:25:51 +05:30
chu23465
bb3087750f Fix for m3u8 ATVP + DSNP 2025-04-30 18:44:40 +05:30
chu23465
b3601b2d12 Update README.md 2025-04-30 10:58:57 +05:30
chu23465
e018a7c615 Small fix 2025-04-30 10:36:32 +05:30
chu23465
e3eb83b4ae Another fix for ATVP m3u8 2025-04-30 09:48:47 +05:30
chu23465
bfcb36734b Small change 2025-04-30 08:44:32 +05:30
chu23465
0ba20cf636 Small change 2025-04-30 08:40:09 +05:30
chu23465
3dc940ac53 Small change 2025-04-30 08:34:34 +05:30
chu23465
34bf93c2d5 Fix
Fix for getting CSRF token for Germany region (Amazon)
2025-04-30 08:31:28 +05:30
chu23465
a80eb98e36 Few changes 2025-04-30 07:56:03 +05:30
chu23465
ece2b50cad Fix ATVP m3u8 parsing error. 2025-04-30 07:14:14 +05:30
chu23465
9f2b9c2788 Hybrid for DSNP (Beta) 2025-04-29 17:36:04 +05:30
chu23465
71f248eed0 DSNP service should be more stable with proxies now 2025-04-29 16:33:26 +05:30
chu23465
b1454b8b95 Few changes 2025-04-29 15:52:57 +05:30
chu23465
4eeda5414d DV+HDR test 2025-04-29 15:32:44 +05:30
chu23465
46fb3e7adf Netflix changes, add delay for subs 2025-04-29 12:35:39 +05:30
chu23465
fca2266d33 Few more netflix related changes 2025-04-29 11:54:44 +05:30
chu23465
18d70442e3 Few changes to netflix service 2025-04-29 10:58:07 +05:30
chu23465
91b146ca5b Max fixes, disable insecure request warning 2025-04-29 10:07:55 +05:30
chu23465
4443e0fe11 Fix for -vb track selection 2025-04-29 09:40:37 +05:30
chu23465
5ca267d05f Hotstar fix for track selection 2025-04-29 09:04:01 +05:30
chu23465
bbdcbb64b6 Small fix 2025-04-29 08:29:00 +05:30
chu23465
e4a0e7aab6 Update README.md 2025-04-29 08:27:24 +05:30
chu23465
a9207d263d Changes
Added custom m3u8 parser for DSNP.
Hybrid DV+HDR is to be tested.
Few miscellaneous fixes.
2025-04-29 08:16:38 +05:30
chu23465
4a773d1db0 Features
Max audio compatibility added. Changes for Hybrid DV/HDR.
2025-04-24 17:24:28 +05:30
chu23465
4fcd0748c0 Update README.md 2025-04-24 17:21:47 +05:30
chu23465
f42208cce0 Merge pull request #39 from MrHulk02/dev
SUNNXT service by MrHulk02
2025-04-24 10:45:07 +05:30
MrHulk
5da8c7ff21 sunnxt service 2025-04-24 10:30:43 +05:30
MrHulk
3eab5807a4 sunnxt service 2025-04-24 10:30:39 +05:30
chu23465
d8b223b184 Update tracks.py 2025-04-22 21:43:01 +05:30
chu23465
62030d9527 Minor changes 2025-04-22 21:37:18 +05:30
chu23465
3e3fb73516 Small fixes 2025-04-21 22:04:20 +05:30
chu23465
5f98f329af Feature and a fix
Fix: --atmos works now with AMZN even if not -q 2160 or -r DV
Feature: Selecting more than 1 track based on channels or codec. Not complete
2025-04-19 22:57:19 +05:30
chu23465
62972b20cf Update README.md 2025-04-19 14:59:44 +05:30
chu23465
280c280d6d Merge branch 'dev' of https://github.com/chu23465/VT-PR into dev 2025-04-19 00:12:51 +05:30
chu23465
1ace8c7e32 Few changes 2025-04-19 00:12:45 +05:30
chu23465
0a5c89bc07 Merge pull request #25 from Sihht/fix-codec
Fix codec
2025-04-18 23:17:34 +05:30
chu23465
22c0489a47 Merge branch 'dev' into fix-codec 2025-04-18 23:15:55 +05:30
chu23465
dd71f707f6 Changes
Implemented track download skip if file already exists.
A few Linux support changes.
Implemented caching cookies to profile cookies path.
2025-04-18 23:13:05 +05:30
chu23465
bffc9b0d7a removing some bad code 2025-04-18 18:13:11 +05:30
Sihht
98db15454c Update dl.py 2025-04-17 11:30:55 -05:00
Sihht
16f267052d fix audio codec 2025-04-17 10:54:12 -05:00
Sihht
31e81432e3 audio codec fix 2025-04-17 10:51:20 -05:00
chu23465
bfbda1dff4 Small change to resume download 2025-04-17 17:29:39 +05:30
chu23465
93fdfebc69 Update dl.py 2025-04-17 16:09:39 +05:30
chu23465
61d21e94e3 Merge branch 'dev' of https://github.com/chu23465/VT-PR into dev 2025-04-17 15:53:57 +05:30
chu23465
59c6e7fb92 Update paramountplus.py 2025-04-17 15:53:39 +05:30
chu23465
6553e7f19e Update README.md 2025-04-17 14:17:48 +05:30
chu23465
cf9f1cee2e Update README.md 2025-04-17 14:07:53 +05:30
chu23465
3b1bbdb7fd Fixes for HULU 4K and HS AVC 4K 2025-04-17 14:03:47 +05:30
chu23465
3efc534b10 Another fix for HULU 2025-04-17 11:03:32 +05:30
chu23465
31aa7d19c7 Merge branch 'dev' of https://github.com/chu23465/VT-PR into dev 2025-04-17 10:02:35 +05:30
chu23465
ac7c9902fb Small fix for HULU 2025-04-17 10:02:21 +05:30
chu23465
335cd2e567 Update README.md 2025-04-17 08:43:20 +05:30
chu23465
162fa17637 Update README.md 2025-04-17 08:26:36 +05:30
chu23465
163ac2096f Update README.md 2025-04-16 22:49:32 +05:30
chu23465
1867fb9e5e Proper fix for DSNP
DSNP service is currently stable
2025-04-16 21:47:14 +05:30
chu23465
b01bbdb53f Update README.md 2025-04-16 21:24:24 +05:30
chu23465
fc4bef6318 Update README.md 2025-04-16 21:15:35 +05:30
chu23465
3917330be7 Update README.md 2025-04-16 19:40:12 +05:30
chu23465
f2cf356418 Trial fix 2025-04-16 18:48:57 +05:30
chu23465
e3442d2793 I am Stupid 2025-04-16 18:23:59 +05:30
chu23465
b73b43163c Another possible fix for DSNP KID 2025-04-16 18:08:11 +05:30
chu23465
e3e5f7bedb Small change 2025-04-16 17:28:32 +05:30
chu23465
a8edede94b Fix for M3U8 Hotstar 2025-04-16 17:20:32 +05:30
chu23465
5b88624051 PyInstaller support
Currently builds but errors while executing
2025-04-16 16:09:35 +05:30
chu23465
5827a968b2 Add a max speed if Akamai for N_m3u8 2025-04-16 15:30:41 +05:30
chu23465
cd5a956e62 Update README.md 2025-04-16 15:26:41 +05:30
chu23465
8e7e45c14a Merge branch 'dev' of https://github.com/chu23465/VT-PR into dev 2025-04-16 14:56:40 +05:30
chu23465
fe942132dd Support paths with spaces in them for N_m3u8 2025-04-16 14:56:36 +05:30
chu23465
cc30f5a6f6 Update README.md 2025-04-16 14:39:58 +05:30
chu23465
0b240e7308 Merge branch 'dev' of https://github.com/chu23465/VT-PR into dev 2025-04-16 13:51:10 +05:30
chu23465
e1aeee8d36 Fix for DSNP incorrect kid 2025-04-16 13:51:04 +05:30
chu23465
9940292783 Update README.md 2025-04-15 19:16:44 +05:30
chu23465
11108223bc ISM Atmos fix for Amazon 2025-04-15 19:07:41 +05:30
chu23465
999c73d1e6 Remove base64 encode 2025-04-14 11:56:35 +05:30
chu23465
78b781e794 Debug license 2025-04-14 11:10:51 +05:30
chu23465
616ab317c1 Integrate subby 2025-04-14 08:39:57 +05:30
chu23465
0c92977f5a Remove decode in license challenge (DSNP) 2025-04-14 08:38:38 +05:30
chu23465
5b8c9a0fb2 Added a strip-sdh flag 2025-04-14 08:27:08 +05:30
chu23465
ca9c9a0cf8 Another DSNP fix 2025-04-13 15:19:36 +05:30
chu23465
3b894cd31a Update disneyplus.py 2025-04-13 13:22:20 +05:30
chu23465
375320f0c4 Update README.md 2025-04-13 07:55:35 +05:30
chu23465
4eb7ddedb8 Fix DSNP 2025-04-13 07:52:20 +05:30
chu23465
c9e04e3499 Wrong check 2025-04-13 07:30:15 +05:30
chu23465
dc9cfc4676 Another fix for DSNP 2025-04-13 07:28:11 +05:30
chu23465
3bb39a9b64 Fix DSNP error and vbitrate error 2025-04-13 07:03:31 +05:30
chu23465
762610427e Another fix for vbitrate min 2025-04-13 05:26:43 +05:30
chu23465
56769a537d Small fix for bitrate select 2025-04-13 03:31:20 +05:30
chu23465
debc33f62e Added DSNP service 2025-04-13 03:22:05 +05:30
chu23465
f90eaff39b License decode fix 2025-04-12 23:18:27 +05:30
chu23465
5290850dc6 Fix vbitrate == "min" 2025-04-11 06:26:41 +05:30
chu23465
ca46bd4906 Update README.md 2025-04-10 11:06:46 +05:30
chu23465
21c327a6aa Update README.md 2025-04-10 11:06:30 +05:30
chu23465
2f6e521d75 Added Cache path clear for AMZN 2025-04-10 01:03:28 +05:30
chu23465
8a08433ff8 Fixed NoneType vbitrate error 2025-04-10 01:01:19 +05:30
chu23465
8e1387db8e Update README.md 2025-04-09 12:05:36 +05:30
chu23465
633653807e Merge branch 'dev' of https://github.com/chu23465/VT-PR into dev 2025-04-09 11:57:54 +05:30
chu23465
dffb6afec9 Added a minimum video bitrate select feature 2025-04-09 11:57:44 +05:30
chu23465
70d56c1000 Update README.md 2025-04-09 11:39:50 +05:30
chu23465
d7b3b4ff81 Merge branch 'dev' of https://github.com/chu23465/VT-PR into dev 2025-04-09 11:37:26 +05:30
chu23465
49d2dcffd2 Added --closest-resolution flag 2025-04-09 11:37:22 +05:30
chu23465
b439e4ea07 Update README.md 2025-04-09 11:34:31 +05:30
chu23465
46ef234784 Update README.md
Added credits
2025-04-09 11:17:16 +05:30
chu23465
05a9fbd497 Update README.md 2025-04-09 11:04:40 +05:30
chu23465
35dfeb8f7d Fixed 640kbps audio AMZN 2025-04-09 10:38:13 +05:30
chu23465
88a682d1ef Changes
Possible fix for Amazon refresh token
Basic install.sh for Linux
Removed unnecessary requirements.txt
2025-04-09 07:13:09 +05:30
chu23465
1d94ecef7f Update README.md
Added more To-Do, some organization
2025-04-09 07:13:09 +05:30
Aswin
5ac64b0348 Fixed Amazon
Amazon should be back to normal
2025-04-08 18:28:33 +05:30
Aswin
0bc8c533fd Error
This version is erroring for Amazon. Hotstar still works fine. Committed a few more devices.
2025-04-08 09:48:20 +05:30
chu23465
9e7f45a42e Update README.md
Small change to proxy instructions.
2025-04-07 19:40:35 +05:30
Aswin
73d53c3efe Changes
Added a `--latest-episode` flag. Updated README to include correct ASIN display script. Fixed MAX seasons length.
2025-04-07 19:31:40 +05:30
Aswin
09e03b47f7 Hotstar with DRM
I forgot to test Hotstar with DRM. Made changes to support DRM correctly.
2025-04-07 00:45:16 +05:30
chu23465
1c15375821 Update README.md 2025-04-06 21:52:59 +05:30
chu23465
15fee8704b Update README.md
Hotstar information
2025-04-06 21:51:56 +05:30
Aswin
dd67896a9f Small change 2025-04-06 20:53:27 +05:30
Aswin
96e36e7a05 Hotstar
Hotstar (HS) service has been tested and added.
2025-04-06 20:27:22 +05:30
chu23465
4b4ed735ef Update pyproject.toml
Forgot to resolve dependencies. Updated - should work now.
2025-04-03 17:01:25 +05:30
chu23465
4c82b3a29e Update README.md
Grammar
2025-04-01 14:10:40 +05:30
Aswin
fb7b303760 Small change 2025-04-01 02:43:03 +05:30
Aswin
ecb26968da Start support for Hotstar
Hotstar is currently throwing errors when getting tracks, will be fixed with next commit.
Fixed error when loading Widevine Device.
2025-04-01 02:41:59 +05:30
chu23465
ddcb82a853 Update README.md
Proxy stuff
Update README.md - Grammar
Update issue templates
Update amazon.py
Star history dark theme
2025-04-01 00:56:19 +05:30
Aswin
05ed9d57df Changes
Add default device
Switch to requests to get manifest.
Import peacock service
2025-03-23 09:56:39 +05:30
chu23465
82cfd1c484 Update README.md 2025-03-21 00:41:14 +05:30
Aswin
d7c4f1e71f Merge remote-tracking branch 'origin/dev' into dev 2025-03-21 00:39:53 +05:30
Aswin
13a9a72b80 Linux support WIP 2025-03-21 00:39:48 +05:30
chu23465
ba1963a03e Update README.md 2025-03-20 17:01:46 +05:30
chu23465
a8f2f9bf62 Update to default vquality to HD in amazon.py 2025-03-20 02:46:37 +05:30
chu23465
0e1b8a1f54 Fix error max.py 2025-03-20 01:41:02 +05:30
271 changed files with 44433 additions and 8356 deletions

21
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@ -0,0 +1,21 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: chu23465
---
**Describe the bug**
A clear and concise description of what the bug is.
Command used - `[your command]`
**Log**
Add the appropriate log(s) from `vinetrimmer/Logs/` directory. Either add them as text to the issue like below:
```
......Log......
```
Or upload them as attachments to the issue.


@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

5
.gitignore vendored

@ -1,11 +1,11 @@
/misc/
/Temp/
/Downloads/
/vinetrimmer/Cache/
/vinetrimmer/Cookies/
/vinetrimmer/Logs/
.DS_Store
key_store.db
# Created by https://www.toptal.com/developers/gitignore/api/python
# Edit at https://www.toptal.com/developers/gitignore?templates=python
@ -183,7 +183,6 @@ pyrightconfig.json
# End of https://www.toptal.com/developers/gitignore/api/python
devices/
scalable/43.xml
scalable/40.xml


@ -2,6 +2,9 @@
<module type="PYTHON_MODULE" version="4">
  <component name="NewModuleRootManager">
    <content url="file://$MODULE_DIR$">
      <sourceFolder url="file://$MODULE_DIR$/scripts/protobuf3" isTestSource="false" />
      <sourceFolder url="file://$MODULE_DIR$/scripts/pyplayready" isTestSource="false" />
      <sourceFolder url="file://$MODULE_DIR$/scripts/pywidevine" isTestSource="false" />
      <excludeFolder url="file://$MODULE_DIR$/.venv" />
    </content>
    <orderEntry type="jdk" jdkName="Python 3.10" jdkType="Python SDK" />

26
.idea/runConfigurations/poetry.xml generated Normal file

@ -0,0 +1,26 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="poetry" type="PythonConfigurationType" factoryName="Python">
<module name="PlayReady-Amazon-Tool-main" />
<option name="ENV_FILES" value="" />
<option name="INTERPRETER_OPTIONS" value="" />
<option name="PARENT_ENVS" value="true" />
<envs>
<env name="PYTHONUNBUFFERED" value="1" />
</envs>
<option name="SDK_HOME" value="" />
<option name="SDK_NAME" value="Poetry (PlayReady-Amazon-Tool-main)" />
<option name="WORKING_DIRECTORY" value="$PROJECT_DIR$/" />
<option name="IS_MODULE_SDK" value="false" />
<option name="ADD_CONTENT_ROOTS" value="true" />
<option name="ADD_SOURCE_ROOTS" value="true" />
<EXTENSION ID="PythonCoverageRunConfigurationExtension" runner="coverage.py" />
<option name="SCRIPT_NAME" value="vinetrimmer.py" />
<option name="PARAMETERS" value="dl --no-cache --keys AMZN 0H7LY5ZKKBM1MIW0244WE9O2C4" />
<option name="SHOW_COMMAND_LINE" value="false" />
<option name="EMULATE_TERMINAL" value="false" />
<option name="MODULE_MODE" value="false" />
<option name="REDIRECT_INPUT" value="false" />
<option name="INPUT_FILE" value="" />
<method v="2" />
</configuration>
</component>


@ -1,5 +1,11 @@
https://tv.apple.com/us/show/ray-donovan/umc.cmc.hr7pnm1wbx98w1h3pg7dfbey
https://tv.apple.com/us/show/party-down/umc.cmc.6myol1kgcd19kerlujhtcr8kg
https://tv.apple.com/us/show/mythic-quest/umc.cmc.1nfdfd5zlk05fo1bwwetzldy3
https://tv.apple.com/us/show/the-completely-made-up-adventures-of-dick-turpin/umc.cmc.37r7vskzmm8hk2pfbzaxlcwzg
https://tv.apple.com/us/show/the-office-superfan-episodes/umc.cmc.3r3om9j6edlrnznl5pfassikv
https://tv.apple.com/us/show/trailer-park-boys-the-swearnet-show/umc.cmc.71tbyxchxiwotaysuuztm8p54
https://tv.apple.com/us/show/fridays/umc.cmc.ve44y99fmo41lok4mx7azvfi
https://tv.apple.com/us/show/utopia/umc.cmc.4uzbqvarwjrbkqz92796oelqj
https://tv.apple.com/us/movie/oceans-eleven/umc.cmc.4mt9j4jqou4mlup1pc9riyo63
https://tv.apple.com/us/movie/bullet-train/umc.cmc.5erhpztw3spfkfi0daabkmaq0
@ -77,9 +83,3 @@ https://tv.apple.com/us/show/the-state/umc.cmc.5af6lx6evkseyhotjzhr16oot | The S
https://tv.apple.com/us/show/upright-citizens-brigade/umc.cmc.638n6gvt13rg3w8g24h1chmdr | Upright Citizens Brigade - Apple TV
https://tv.apple.com/us/show/fridays/umc.cmc.ve44y99fmo41lok4mx7azvfi | Fridays - Apple TV
https://tv.apple.com/us/show/drunk-history/umc.cmc.2fai5tmqz2z6g9iy8er8ft11m | Drunk History - Apple TV
https://tv.apple.com/us/show/ray-donovan/umc.cmc.hr7pnm1wbx98w1h3pg7dfbey
https://tv.apple.com/us/show/party-down/umc.cmc.6myol1kgcd19kerlujhtcr8kg
https://tv.apple.com/us/show/the-office-superfan-episodes/umc.cmc.3r3om9j6edlrnznl5pfassikv
https://tv.apple.com/us/show/trailer-park-boys-the-swearnet-show/umc.cmc.71tbyxchxiwotaysuuztm8p54
https://tv.apple.com/us/show/fridays/umc.cmc.ve44y99fmo41lok4mx7azvfi
https://tv.apple.com/us/show/utopia/umc.cmc.4uzbqvarwjrbkqz92796oelqj

Binary file not shown.

510
README.md

@ -1,23 +1,507 @@
Removed:
Hi, I'm PlayReady
This is me Sofiya, I am posting this to show how we can use SL2000 Certificate to do amazon using Playready drm.
"---Always Work Hard and Trust the Process---"
Amazon Demonstration using SL2000
WE have all Certificates SL2000 & SL 30000
We have all codes to disney and all sites
This is posted to punish the people who are making playready easy available
If you wanna collabrate & need support mail us on Playreadydrm@proton.me
Update : API USED IN THIS IS DOWN DUE TO DDOS
Command used
poetry run vt dl -al en -sl en -q 1080 Amazon -b cbr -vq hd 0NRT15S2XG06SG5HBV5NQAW3E3
https://github.com/Playreadydrm/PlayReady-Amazon-Tool/assets/170321722/1fdacab6-d1db-41f4-82f6-a73b5e1286c8
Added:
# VineTrimmer-PlayReady
A tool to download and remove DRM from streaming services. Modified to remove PlayReady DRM in addition to Widevine DRM.
The name `VineTrimmer` comes from `Vine` as in `WideVine` and `Trimmer` as in remove.
## Read the README thoroughly at least twice. I cannot stress how important this is. There is a reason why this README is so verbose.
## This project is under active development. Expect bugs and errors.
## Disclaimer!!!
This project is ONLY for educational/archival/personal purposes. I do not condone piracy in any form.
By using this project you agree that:
`The developer shall not be held responsible for any account suspensions, terminations, penalties or legal action taken/imposed by third-party platforms. The User acknowledges and agrees that they are solely responsible for complying with all terms, policies, copyright and guidelines of any such platforms.`
I AM NOT taking credit for the entirety of this project. It is based on a version of an old fork of [devine](https://github.com/devine-dl/devine) that was found floating around online. I AM taking credit for the roughly 20% of additional work that I did personally.
Support for sport replays (VOD) or live streams is not planned. It's a whole thing with OTT panels and restreaming and whatnot; a can of worms that I don't plan on opening.
## Supporters
[@m41c0n](https://github.com/m41c0n)
## Features
- Progress bars for decryption ([mp4decrypt](https://github.com/chu23465/bentoOldFork), Shaka)
- Reprovision .prd automatically after 2 days
- ISM manifest support (Microsoft Smooth Streaming) (WIP/experimental)
- N_m3u8DL-RE downloader support (experimental)
- Atmos audio with ISM manifests (Amazon) is fixed
- Resume for failed downloads: if a track was successfully downloaded previously and exists in the `Temp` directory (encrypted or decrypted), VT will not download it again (see the sketch after this list)
- Hybrid creation with [dovi_tool](https://github.com/quietvoid/dovi_tool/). This feature is in beta and only tested so far on DisneyPlus. It needs more work: file naming needs correction, the temp directory is a mess after hybrid creation, and another tool instead of `dovi_tool` should be used to get Profile 8.1 DV-HDR10+ instead of DV Profile 5 HDR10-compatible.
## Usage
### Windows
1. Install Microsoft Visual C++ Redistributable - [link](https://aka.ms/vs/17/release/vc_redist.x64.exe).
2. Ensure Python is installed on your system (it cannot be from the Microsoft Store). Refer to [link](https://www.python.org/downloads/) or, on Ubuntu, `sudo apt install python3`. I recommend Python 3.10.11 (or higher). Python 3.13 does not work.
3. Make sure git is installed on your system by running `git --version`. If not, refer to [link](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
4. Choose a branch, either `dev` or `main`, and use the command below to download it. (Recommended over downloading a zip.)
```bash
git clone -b <branch-name> --single-branch https://github.com/chu23465/VT-PR
```
5. Navigate to and find `install.bat`.
6. Run `install.bat`.
7. Activate the venv using `venv.cmd`.
8. Run the desired command using poetry.
### Linux
Linux support is in beta.
Command:
```
wget https://github.com/chu23465/VT-PR/raw/refs/heads/dev/install.sh && chmod +x install.sh && bash install.sh
```
## Updating
1. Back up your `vinetrimmer/Cookies/`, `vinetrimmer/Cache/`, and `Downloads` directories just in case.
2. Open a command prompt and navigate to your `VT-PR` directory.
3. Recall the branch you downloaded and modify the command below accordingly:
```bash
git pull origin <branch-name>
```
Make sure `git pull` succeeds. If not, run `git stash` and try again.
### Config
`vinetrimmer.yml` is located within the `/vinetrimmer/` folder.
`decryptor:` either `mp4decrypt` or `packager`
(shaka-packager fails to decrypt files downloaded from ISM/Microsoft Smooth Streaming manifests)
`tag:` tag for your release group
The CDM can be configured per service or per profile.
```
cdm:
    default: {text}
    Amazon: {text}
```
All other options can be left at their defaults, unless you know what you are doing.
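As an illustration of that lookup, here is a hedged sketch (not VT's actual loader) of resolving a per-service CDM with a fallback to `default`, using the mapping shown above:
```python
# Assumes PyYAML is available; the config path matches the location given above.
import yaml

with open("vinetrimmer/vinetrimmer.yml", encoding="utf-8") as f:
    config = yaml.safe_load(f)

def cdm_for(service: str) -> str:
    # A per-service entry wins; otherwise fall back to the default CDM.
    cdm_map = config.get("cdm", {})
    return cdm_map.get(service) or cdm_map["default"]

print(cdm_for("Amazon"))
```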
### General Options
Usage: vt.cmd [OPTIONS] COMMAND [ARGS]...
Options:
| Command line argument | Description | Default Value |
|----------------------------|-----------------------------------------------------------------------------------------------|-----------------------------------|
| -d, --debug | Flag to enable debug logging | False |
| -p, --profile | Profile to use when multiple profiles are defined for a service. | "default" |
| -q, --quality | Download Resolution ie Height of Video Track wanted | 1080 |
| -cr, --closest-resolution | If resolution specified is not found, defaults to closest resolution available | False |
| -v, --vcodec | Video Codec | H264 |
| -a, --acodec | Audio Codec | None |
| -vb, --vbitrate | Video Bitrate, `Min` or a number based on output of --list | Max |
| -ab, --abitrate | Audio Bitrate | Max |
| -ac, --audio-channels | Select Audio by Channels Configuration, e.g `2.0`, `5.1`, `2.0,5.1` | None |
| -mac, --max-audio-compatability | Select multiple audios for maximum compatibility with all devices | False |
| -aa, --atmos | Prefer Atmos Audio | False |
| -r, --range | Video Color Range `HDR`, `HDR10`, `DV`, `SDR` | SDR |
| -w, --wanted | Wanted episodes, e.g. `S01-S05,S07`, `S01E01-S02E03`, `S02-S02E03` | Defaults to all |
| -le, --latest-episode | Download only the latest episode on episodes list | False |
| -al, --alang | Language wanted for audio. | Defaults to original language |
| -sl, --slang | Language wanted for subtitles. | Defaults to original language |
| --proxy | Proxy URI to use. If a 2-letter country is provided, it will try to get a proxy from the config. | None |
| -A, --audio-only | Only download audio tracks. | False |
| -S, --subs-only | Only download subtitle tracks. | False |
| -C, --chapters-only | Only download chapters. | False |
| -ns, --no-subs | Do not download subtitle tracks. | False |
| -na, --no-audio | Do not download audio tracks. | False |
| -nv, --no-video | Do not download video tracks. | False |
| -nc, --no-chapters | Do not download chapters tracks. | False |
| -ad, --audio-description | Download audio description tracks. | False |
| --list | Skip downloading and list available tracks and what tracks would have been downloaded. | False |
| --selected | List selected tracks and what tracks are downloaded. | False |
| --cdm | Override the CDM that will be used for decryption. | None |
| --keys | Skip downloading, retrieve the decryption keys (via CDM or Key Vaults) and print them. | False |
| --cache | Disable the use of the CDM and only retrieve decryption keys from Key Vaults. If a needed key is unable to be retrieved from any Key Vaults, the title is skipped.| False |
| --no-cache | Disable the use of Key Vaults and only retrieve decryption keys from the CDM. | False |
| --no-proxy | Force disable all proxy use. | False |
| -nm, --no-mux | Do not mux the downloaded and decrypted tracks. | False |
| --mux | Force muxing when using --audio-only/--subs-only/--chapters-only. | False |
| -ss, --strip-sdh | Strip SDH subtitles and convert them to CC, plus fix common errors. | False |
| -mf, --match-forced | Only select forced subtitles matching with specified audio language | False |
| -?, -h, --help | Show this message and exit. | False |
Currently supported platforms (pass as COMMAND):
| Alias | Command | Service Link |
|--------|---------------|--------------------------------------------|
| AMZN | Amazon | https://amazon.com, https://primevideo.com |
| ATVP | AppleTVPlus | https://tv.apple.com |
| DSNP | DisneyPlus | https://disneyplus.com/ |
| HS | Hotstar | https://www.hotstar.com/ |
| HULU | Hulu | https://hulu.com |
| iT | iTunes | https://itunes.apple.com |
| MAX | Max | https://max.com |
| PCOK | Peacock | https://peacocktv.com/ |
Untested or not fully implemented services:
| Alias | Command | Service Link |
|--------|-----------------|-----------------------------|
| JC | JioCinema | https://www.jiocinema.com |
| MA | MoviesAnywhere | https://moviesanywhere.com |
| NF | Netflix | https://netflix.com |
| PMTP | ParamountPlus | https://paramountplus.com |
| SL | SonyLiv | https://sonyliv.com |
### Amazon Specific Options
Usage: vt.cmd AMZN [OPTIONS] [TITLE]
Service code for Amazon VOD (https://amazon.com) and Amazon Prime Video (https://primevideo.com).
Authorization: Cookies
Security:
```
UHD@L1/SL3000
FHD@L3(ChromeCDM)/SL2000
SD@L3
Certain SL2000 can do UHD
```
Amazon maintains its own license server (like Netflix does), so be cautious.
Region is chosen automatically based on domain extension found in cookies.
Prime Video-specific code will run if the ASIN is detected to be a Prime Video variant.
Use the 'Amazon Video ASIN Display' Tampermonkey script to find a title's ASIN:
https://greasyfork.org/en/scripts/496577-amazon-video-asin-display
The flags below are passed after the `AMZN` or `Amazon` keyword in the command.
ARGS:
| Command Line Switch | Description |
|-------------------------------------|-----------------------------------------------------------------------------------------------------|
| -b, --bitrate | Video Bitrate Mode to download in. CVBR=Constrained Variable Bitrate, CBR=Constant Bitrate. (CVBR or CBR or CVBR+CBR) |
| -c, --cdn | CDN to download from, defaults to the CDN with the highest weight set by Amazon. |
| -vq, --vquality | Manifest quality to request. (SD or HD or UHD) |
| -s, --single | Force single episode/season instead of getting series ASIN. |
| -am, --amanifest | Manifest to use for audio. Defaults to H265 if the video manifest is missing 640k audio. (CVBR or CBR or H265) |
| -aq, --aquality | Manifest quality to request for audio. Defaults to the same as --quality. (SD or HD or UHD) |
| -ism, --ism | Set manifest override to SmoothStreaming. Defaults to DASH w/o this flag. |
| -?, -h, --help | Show this message and exit. |
To get Atmos/UHD/4K with Amazon, navigate to:
```
https://www.primevideo.com/mytv
```
Remember that not all titles have 4K/Atmos/HDR/DV.
Log in and get to the code pair page. Extract cookies from that page using [Open Cookies.txt](https://chromewebstore.google.com/detail/open-cookiestxt/gdocmgbfkjnnpapoeobnolbbkoibbcif).
Save them to the path `vinetrimmer/Cookies/Amazon/default.txt`. Pay attention to the path if you are on Linux; it is case-sensitive.
When caching cookies, use a profile without a PIN; otherwise it may cause errors.
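If you want to verify a saved cookies file before running VT, a quick stdlib-only check (this is not part of VT) is to load it as a Netscape-format cookie jar:
```python
from http.cookiejar import MozillaCookieJar

jar = MozillaCookieJar("vinetrimmer/Cookies/Amazon/default.txt")
jar.load(ignore_discard=True, ignore_expires=True)  # raises on a malformed file
print(f"{len(jar)} cookies loaded")
```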
If you are facing 403 or 400 errors even after saving fresh cookies and clearing `Cache` folder, try logging out of your Amazon account in the browser and logging back in. Then save cookies.
If a title page says `UHD/2160p` is available but VT reports `no 2160p track available`, then `UHD/2160p` is only available via renting: some titles advertise UHD that is not available to Prime Video subscribers. You will have to rent the title in UHD quality using the Rent button on the title page.
If you are getting an `AssertionError` with Amazon, then try reprovisioning the device. I have included a batch script in the `vinetrimmer/devices/` directory to do this. Simply execute the script and try again.
If you are getting a `TooManyDevices` error, or Amazon is giving you trouble with some weird error, then log out in the browser, log back in, and extract and use fresh cookies. Also try deleting `vinetrimmer/Cache/AMZN/`.
If you want to try a different CDM, you will need the corresponding DeviceTypeID (DTID) put into `amazon.yml`. As far as I know, you would need to sniff the traffic from the device (with the CDM) to get the DTID.
If you are getting a `PRS.NoRights` error, there are 3 possible explanations. One, the CDM simply needs to be reprovisioned. Two, you are using the incorrect DTID for the given CDM. Three, Amazon has revoked the CDM or downgraded it to HD/SD quality only.
If your region has an ad-free subscription tier, you will need it for 4K/HDR/DV.
Newer titles only have 4K in the ISM manifest, so you will need to use the `--ism` flag.
### Peacock
- PCOK bans leaked certs quickly (for 4k), be cautious.
- Authorization - cookies saved to `vinetrimmer/Cookies/Peacock/default.txt`
### Hotstar
- To use, log in to Hotstar and navigate to https://www.hotstar.com/{region}/home. Extract cookies from that page and save them to the path `vinetrimmer\Cookies\Hotstar\default.txt` (case-sensitive).
- Otherwise add credentials to `vinetrimmer.yml`. An example is given.
- A free account has access to lots of content.
- Hotstar requires an Indian (+91) phone number to sign up for the Indian region, even for a free account.
- It hates VPNs; get a residential proxy of some sort.
- All content is licensed via Widevine L3 or has no DRM.
### DisneyPlus
- Needs only credentials added to `vinetrimmer.yml`.
- Requires the `-m` or `--movie` flag if you are downloading a movie; append it to the end of your command.
- From my testing, using a VPN causes lots of issues, mainly needing to clear the `Cache` folder and log in repeatedly. Use residential proxies if available. Don't hammer the service; try waiting a minute or two before logging in again.
- If you are getting `No 2160p track found` error for a title you know has 4k, then try passing `-r DV` or `-r HDR`. Make sure your account can access highest qualities.
- Should be more stable now when using a proxy, but do be careful: the proxy is not used for downloading segments, which means your IP can get temporarily banned from DSNP servers (i.e. persistent 403 errors). If you download the same title multiple times, or many titles/episodes at once or too quickly, your IP address could get banned. It happened to me while testing.
### Hulu
- Authorization: cookies saved to `vinetrimmer/Cookies/Hulu/default.txt`
- Windscribe VPN sometimes fails. Simply try again.
### iTunes
```
Authorization: Cookies saved to default.txt
Security: UHD@L1 FHD@L1 HD@L1 SD@L3
```
This is iTunes via the rental channel on AppleTVPlus.
Login to iTunes in a browser. Try playing a movie. It'll redirect you to `tv.apple.com`. Cache cookies from that page to `vinetrimmer/Cookies/iTunes/default.txt`.
Requires the `-m` or `--movie` flag if you are downloading a movie; append it to the end of your command.
### Example Command
Amazon Example:
```bash
poetry run vt dl -al en -sl all --selected -q 2160 -r HDR -w S01E18-S01E25 AMZN -b CBR --ism 0IQZZIJ6W6TT2CXPT6ZOZYX396
```
Above command:
- gets english audio,
- gets all available subtitles,
- selects the HDR + 4K track,
- gets episodes from S01E18 to S01E25 from Amazon
- with CBR bitrate,
- tries to force ISM
- and the title-ID is 0IQZZIJ6W6TT2CXPT6ZOZYX396
AppleTV Example:
```bash
poetry run vt dl -al en,it -sl en,es -q 720 --proxy http://192.168.0.99:9766 -w S01E01 ATVP umc.cmc.1nfdfd5zlk05fo1bwwetzldy3
```
Above command:
- gets english, italian audio
- gets english, spanish subtitles,
- lists all possible qualities,
- selects 720p video track,
- uses the proxy for licensing,
- gets the first episode of first season (i.e S01E01)
- of the title-ID umc.cmc.1nfdfd5zlk05fo1bwwetzldy3
Max Example:
```bash
poetry run vt dl -al en -sl en --keys --proxy http://192.168.0.99:9766 MAX https://play.max.com/show/5756c2bf-36f8-4890-b1f9-ef168f1d8e9c
```
Above command:
- gets english subtitles + audio,
- skips download and only gets the content keys,
- from MAX
- uses specified proxy
- defaulting to HD for video
- title-ID is 5756c2bf-36f8-4890-b1f9-ef168f1d8e9c
Hotstar Example:
```bash
poetry run vt dl -al en -sl en -q 2160 -v H265 HS https://www.hotstar.com/in/movies/hridayam/1260083403
```
Above command:
- gets english subtitles + audio,
- sets the video codec to H265,
- sets the video quality (i.e. resolution) to 2160p,
- gets the highest quality video/audio available,
- title-ID is 1260083403
## Max Audio Compatibility (MAC)
I have added a special flag called `--max-audio-compatability` or `-mac` for maximum compatibility with all devices. If passed together with `--acodec aac,ec3 -ac 2.0,5.1`, it will select 3 audio tracks, like below:
```
2025-04-24 16:54:23 [I] Tracks : ├─ AUD | [E-AC3] | [ec-3] | 5.1 | 640 kb/s | en-US | [Original]
2025-04-24 16:54:23 [I] Tracks : ├─ AUD | [E-AC3] | [ec-3] | 2.0 | 224 kb/s | en-US | [Original]
2025-04-24 16:54:23 [I] Tracks : ├─ AUD | [AAC] | [mp4a] | 2.0 | 128 kb/s | en-US | [Original]
```
If `-mac` is not passed but `--acodec aac,ec3 -ac 2.0,5.1` is, it will select 2 audio tracks:
```
2025-04-24 17:10:04 [I] Tracks : ├─ AUD | [E-AC3] | [ec-3] | 5.1 | 640 kb/s | en-US | [Original]
2025-04-24 17:10:04 [I] Tracks : ├─ AUD | [AAC] | [mp4a] | 2.0 | 128 kb/s | en-US | [Original]
```
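A rough sketch of the selection semantics as I read them from the two logs above (illustrative only, not VT's actual code): `-mac` keeps every available (codec, channels) combination you asked for, while the default keeps one track per codec.
```python
# Available audio tracks on the title: (codec, channels, bitrate_kbps)
tracks = [("ec-3", "5.1", 640), ("ec-3", "2.0", 224), ("mp4a", "2.0", 128)]
codecs, channels = ["ec-3", "mp4a"], ["5.1", "2.0"]  # --acodec aac,ec3 -ac 2.0,5.1

# -mac: every available combination of a wanted codec and channel layout
mac = [t for t in tracks if t[0] in codecs and t[1] in channels]

# default: a single track per codec, preferring the higher channel count
best: dict[str, tuple] = {}
for t in sorted(mac, key=lambda t: t[1], reverse=True):  # "5.1" sorts above "2.0"
    best.setdefault(t[0], t)

print(mac)                  # 3 tracks: ec-3 5.1, ec-3 2.0, mp4a 2.0
print(list(best.values()))  # 2 tracks: ec-3 5.1, mp4a 2.0
```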
## Proxy
I recommend [Windscribe](https://windscribe.com/). You can sign up and get 10 GB of traffic credit every month for free. The VPN is used for everything except downloading video/audio.
Tested so far on Amazon, AppleTVPlus, Max, and DisneyPlus.
### Steps:
1. For each service, within the get_tracks() function we do the below. This only applies when you are integrating a new service yourself. Set needs_proxy to True if your service needs a proxy to get the manifest (e.g. Netflix, Hotstar).
```python
for track in tracks:
    # False: this track does not need a proxy, so none is passed to the
    # downloader even if one was given on the CLI. Set True for services
    # that need a proxy to fetch the manifest (e.g. Netflix, Hotstar).
    track.needs_proxy = False
```
This flag signals that the track does not need a proxy; a proxy will not be passed to the downloader even if one is given in the CLI options.
2. Download the Windscribe app and install it.
3. Go to `Options` -> `Connection` -> `Split Tunneling`. Enable it.
Set `Mode` to `Inclusive`.
4. Go to `Options` -> `Connection` -> `Proxy Gateway`. Enable it. Select `Proxy Type` as `HTTP`.
Copy the `IP` field (it will look something like `192.168.0.141:9766`).
Pass the copied value to Vinetrimmer with the proxy flag like below.
```bash
...(other flags)... --proxy http://192.168.0.141:9766 .......
```
If you are using another VPN, extract the proxy (use the VPN's browser extension to do this). It will look something like `http(s)://username:pass@host-or-IP:PORT`, e.g. `https://user:pass@domain.com:443`. Pass it like below:
```bash
...(other flags)... --proxy https://user:pass@domain.com:443 .......
```
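To sanity-check that a proxy URI in this form carries all its parts, the Python standard library can split it for you (just a quick check, not something VT requires):
```python
from urllib.parse import urlparse

p = urlparse("https://user:pass@domain.com:443")
print(p.scheme, p.username, p.password, p.hostname, p.port)
# https user pass domain.com 443
```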
## Other
- For `--keys` to work with ATVP you need to pass the `--no-subs` flag.
- Errors arise when running VT within Docker, Conda, or similar Python environments. Make sure to use a proper python3.
- To use the programs in the `scripts` folder, first activate the venv, then:
```bash
poetry run python scripts/ParseKeybox.py
```
- There is another way of running this instead of using `poetry`. In the root folder of VT-PR there is a `vt.py` (essentially the same as `vinetrimmer/vinetrimmer.py`). Activate the venv, then:
```bash
python vt.py dl ......(rest of the command as before).......
```
This is useful for debugging/stepping through in IDEs without having to deal with poetry.
- Nuitka compile:
- Activate venv, then
- `python -m pip install nuitka`
- Verify using command `nuitka --version`
- Then:
```bash
nuitka --standalone --output-dir=dist --windows-console-mode=force vt.py --include-data-dir=./vinetrimmer/=vinetrimmer/ --include-data-dir=./binaries/=binaries
```
- `--standalone` will give a folder of compiled pythonic objects. Zip it to distribute. This is recommended.
- If you don't want to carry around/deal with a zip, instead use `--onefile`. This has the drawback of setting the default folders to the temp folder in whatever OS you are using. This could be fixed with some extra code but that is currently not implemented.
- Refer to [link](https://nuitka.net/user-documentation/user-manual.html) if anything errors out.
## Broken / To-Do (Descending order of priority)
- [ ] First stable release
- [ ] Shaka with progress bar repository
- [ ] Add download speed limit to avoid IP bans.
- [ ] Linux support - Debian/Ubuntu/Mint + Fix service config load for Linux.
- [ ] Single script that installs, and if already installed checks for and applies updates
- [ ] Replace poetry with uv
- [ ] Add [m4ffdecrypt](https://github.com/Eyevinn/mp4ff)
- [ ] Add a version.py
- [ ] Downloader field in config, per service.
- [ ] Make a script to download latest binaries for vt automatically at startup.
- [ ] Detect if running as a Nuitka-compiled binary, then in vt.py set directories relative to the binary path
- [ ] Find a way to estimate the final file size for a track. Check if enough space is left on disk for double the size of the selected tracks, since mp4decrypt and N_m3u8 both make copies of the files
- [ ] Merge DB script
- [ ] Latest mkvtoolnix for linux
- [ ] Modify aria2c to include a progress bar ?
- [ ] GitHub Actions Python script that builds and publishes a release for every commit that touches more than README.md
- [ ] MAX - Fix HDR10/DV --list
- [ ] Fix original language (Was removed as workaround for a bug)
- [ ] Fix `-sl all` and `-al all`
- [ ] Make a windscribe.py for proxies modelled after nordvpn.py. Refer to the chrome extension for the code.
- [ ] Move to requests, curl or otherwise to download subtitles (?)
- [ ] Replace track.dv, track.hdr10 with track.PQ. Value will be an enum. This will require a major-ish rewrite.
- [ ] Netflix service is currently broken (will probably be fixed Soon™)
- [ ] Integrate [subby](https://github.com/vevv/subby)
- [ ] Licensing before download (?)
- [ ] Test and fix MoviesAnywhere, ParamountPlus, services.
- [ ] Guide for writing a service + debugging
- [ ] Implement a scan/hammer/cache keys for each service - pass string of zeros as title id. Then copy and rework dl.py to iterate over returned list of titles from scan function
### Amazon Specific
- [ ] Refresh Token for Amazon service
- [ ] Pythonic implementation of init.mp4 builder for ism manifest for avc, hvcc, dv, ac3, eac3, eac3-joc codecs
- [ ] Make a pure Python requests-based downloader for ISM/MSS manifests: write init.mp4, then download each segment to memory, decrypt it in memory, and append it to a single merged binary file. Download segments in batches, with batch size based on the thread count passed to the program. Segments must be written sequentially. (See the sketch after this list.)
- [ ] `--bitrate CVBR+CBR` is currently broken
- [ ] Get highest quality CBR and CVBR MPD+ISM by default to AMZN
- [ ] Specify devices in config for MPD or ISM then load one based on command
- [ ] For videos, download init.mp4 using N_m3u8, mediainfo it to get FPS, HDR info
- [ ] Manifest url caching system for every key/Track object.
If anyone has any idea how to fix the above issues, feel free to open a pull request.
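For the ISM/MSS downloader to-do above, a bare skeleton of the described flow might look like this (assumed URLs and names; sequential writes, in-memory decryption not implemented):
```python
import requests

def download_ism(init_url: str, segment_urls: list[str], out_path: str, batch: int = 8) -> None:
    """Write init.mp4, then fetch segments in batches and append them in order."""
    with open(out_path, "wb") as out:
        out.write(requests.get(init_url, timeout=30).content)  # init.mp4 first
        for i in range(0, len(segment_urls), batch):
            for url in segment_urls[i:i + batch]:  # batch size ~ thread count
                # in-memory decryption of each segment would happen here
                out.write(requests.get(url, timeout=30).content)
```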
## Donating
I am an independent developer right now. I work on this project in my free time. If you could support me that would be immensely helpful. All supporters will get a special mention in the README. Thank you in advance.
My BuyMeACoffee account was suspended so contact me on discord to donate.
My Discord is `@crapola` or `@chupola`. You will need to join a server I'm in to message me. Join one of the servers mentioned below.
## Credits
[@rlaphoenix](https://github.com/rlaphoenix) for [pywidevine](https://github.com/devine-dl/pywidevine)
[@rlaphoenix](https://github.com/rlaphoenix) again, as he was the original developer behind the `VineTrimmer` base `Widevine` version (later renamed to `devine`).
[@DevLARLEY](https://github.com/DevLARLEY) for [pyplayready](https://git.gay/ready-dl/pyplayready)
[@FieryFly](https://github.com/FieryFly) for an additional MAX fix.
[@vevv](https://github.com/vevv) for [subby](https://github.com/vevv/subby)
[@globocom](https://github.com/globocom/) for [m3u8](https://github.com/globocom/m3u8)
`@Wiesiek` on Discord for a few ideas
[DRM-Lab-Project](https://discord.gg/xHjetwZP) for numerous bug fixes and support.
[Playready-Discord](https://discord.gg/aNNKxurrU6) for numerous bug fixes and support.
Various members of the above mentioned Discord servers for testing, bug reporting, fixes etc. Thank You :)
[CDRM-Project](https://discord.cdrm-project.com/) and `@TPD94`, `@radizu` for getting me started on this journey, being a source of inspiration and for keeping a community well and alive.
[@m0ck69](https://github.com/m0ck69) for sharing a DisneyPlus account for testing purposes.
[@methflix](https://github.com/methflix) for sharing a Hulu account for testing purposes.
The services included here were not written by me. They were either found in the mentioned Discord servers or shared by an individual. If anyone feels like they deserve a credit in the README, open an issue and I'll add you.
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=chu23465/VT-PR&type=Date)](https://www.star-history.com/#chu23465/VT-PR&Date)

File diff suppressed because it is too large


@ -0,0 +1 @@
N_m3u8DL-RE.exe http://avodsls3ww-s.akamaihd.net/ondemand/iad_2/c5a2/7992/6e31/4ed5-8011-893c8d4e98a6/0bc9f599-85c7-450d-b829-b69fb27d4bd6.ism/manifest --thread-count 96 --log-level ERROR --write-meta-json False --http-request-timeout 8

BIN
binaries/N_m3u8DL-RE.exe Normal file

Binary file not shown.

BIN
binaries/XstreamDL-CLI.zip Normal file

Binary file not shown.

Binary file not shown.

Binary file not shown.

BIN
binaries/dovi_tool.exe Normal file

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

BIN
binaries/mp4box.exe Normal file

Binary file not shown.

Binary file not shown.

BIN
binaries/mp4decrypt_1.exe Normal file

Binary file not shown.

1
binaries/mux_atmos.txt Normal file

@ -0,0 +1 @@
ffmpeg -i correct_file.eac3 -map 0 -c:a copy correct_file.mp4

BIN
binaries/packager-old.exe Normal file

Binary file not shown.

Binary file not shown.

23
binary.txt Normal file

@ -0,0 +1,23 @@
dl -al en -sl en --keys -q 2160 --cdm hisense_smarttv_he55a7000euwts_sl3000 -r HDR --selected -w S05E08-S05E24 AMZN -b CBR -vq UHD 0IQZZIJ6W6TT2CXPT6ZOZYX396
--include-data-files=/path/to/scan=folder_name=**/*.txt
--include-data-files=/path/to/file/*.txt=folder_name/some.txt
--include-data-files="./vinetrimmer/services/*.py"=vinetrimmer/services/
--onefile --> if this flag then figure out how to set the directories to NOT TEMP folder
python -m nuitka --onefile --assume-yes-for-downloads --windows-console-mode=disable --show-progress --standalone --output-dir=dist --static-libpython=no vinetrimmer1.py --include-data-dir=./vinetrimmer/=vinetrimmer --include-data-dir=./binaries/=binaries --include-data-dir=./scripts/=scripts
python -m nuitka --onefile --standalone --output-dir=dist vinetrimmer1.py --include-data-dir=./vinetrimmer/services/=vinetrimmer/services --include-data-dir=./vinetrimmer/config/=vinetrimmer/config/ --include-data-dir=./vinetrimmer/config/Services/=vinetrimmer/config/Services/ --include-data-dir=./scripts/=scripts
python -m nuitka --onefile --standalone --windows-console-mode=attach --output-dir=dist vinetrimmer1.py --include-data-dir=./vinetrimmer/=vinetrimmer/ --include-data-dir=./vinetrimmer/services/*.py=vinetrimmer/services/=**/*.py --include-data-dir=./vinetrimmer/config/=vinetrimmer/config/ --include-data-dir=./vinetrimmer/config/Services/=vinetrimmer/config/Services/ --include-data-dir=./scripts/=scripts/
python -m nuitka --onefile --standalone --windows-console-mode=attach --output-dir=dist vinetrimmer1.py --include-data-dir=./vinetrimmer/=vinetrimmer/ --include-data-files=./vinetrimmer/services/*.py=vinetrimmer/services/ --include-data-dir=./vinetrimmer/config/=vinetrimmer/config/ --include-data-dir=./vinetrimmer/config/Services/=vinetrimmer/config/Services/ --include-data-dir=./scripts/=scripts/
python -m nuitka --mode=standalone --output-dir=dist --windows-console-mode=force vinetrimmer1.py --include-data-dir=./vinetrimmer/=vinetrimmer/ --include-data-files="./vinetrimmer/services/*.py"=vinetrimmer/services/ --include-data-dir=./vinetrimmer/config/=vinetrimmer/config/ --include-data-dir=./vinetrimmer/config/Services/=vinetrimmer/config/Services/ --include-data-dir=./scripts/=scripts/
python -m nuitka --onefile --follow-imports --output-dir=dist --windows-console-mode=force vinetrimmer1.py --include-data-dir=./vinetrimmer/=vinetrimmer/ --include-data-files="./vinetrimmer/services/*.py"=vinetrimmer/services/ --include-data-dir=./vinetrimmer/config/=vinetrimmer/config/ --include-data-dir=./vinetrimmer/config/Services/=vinetrimmer/config/Services/ --include-data-dir=./scripts/=scripts/ --include-data-files="./vinetrimmer/config/*.py"=vinetrimmer/config/
python -m nuitka --onefile --follow-imports --output-dir=dist --standalone --clang --windows-console-mode=force --show-memory vinetrimmer1.py --include-data-dir=./vinetrimmer/=vinetrimmer/ --include-data-files="./vinetrimmer/services/*.py"=vinetrimmer/services/ --include-data-dir=./vinetrimmer/config/=vinetrimmer/config/ --include-data-dir=./vinetrimmer/config/Services/=vinetrimmer/config/Services/ --include-data-dir=./scripts/=scripts/ --include-data-files="./vinetrimmer/config/*.py"=vinetrimmer/config/
python -m nuitka --follow-imports --output-dir=dist --standalone --clang --windows-console-mode=force --show-memory vinetrimmer1.py --include-data-dir=./vinetrimmer/=vinetrimmer/ --include-data-files="./vinetrimmer/services/*.py"=vinetrimmer/services/ --include-data-dir=./vinetrimmer/config/=vinetrimmer/config/ --include-data-dir=./vinetrimmer/config/Services/=vinetrimmer/config/Services/ --include-data-dir=./scripts/=scripts/ --include-data-files="./vinetrimmer/config/*.py"=vinetrimmer/config/
python -m nuitka --onefile --follow-imports --output-dir=dist --standalone --clang --windows-console-mode=force --show-memory vinetrimmer1.py --include-data-dir=./vinetrimmer/=vinetrimmer/ --include-data-dir=./vinetrimmer/config/=vinetrimmer/config/ --include-data-dir=./vinetrimmer/config/Services/=vinetrimmer/config/Services/ --include-data-dir=./scripts/=scripts/ --include-data-files="./vinetrimmer/config/*.py"=vinetrimmer/config/
nuitka --output-dir=dist --standalone --windows-console-mode=force vinetrimmer1.py --include-data-dir=./vinetrimmer/=vinetrimmer/
nuitka --onefile --output-dir=dist --windows-console-mode=force vt.py --include-data-dir=./vinetrimmer/=vinetrimmer/


@ -1,42 +1,34 @@
https://www.primevideo.com/region/eu/storefront
poetry run vt dl -al en -sl en -r HDR --list AMZN 0H7LY5ZKKBM1MIW0244WE9O2C4
poetry run vt dl -al en -sl en --list AMZN 0H7LY5ZKKBM1MIW0244WE9O2C4
poetry run vt dl -al en --selected --keys AMZN 0H7LY5ZKKBM1MIW0244WE9O2C4
poetry run vt dl -al en --selected AMZN 0H7LY5ZKKBM1MIW0244WE9O2C4
poetry run vt dl -al en -sl en -r HDR --list Amazon 0SGEGC629FCXQ5DJ9ORNE42PXK
poetry run vt dl -q 2160 -al en -sl en --list AMZN 0H7LY5ZKKBM1MIW0244WE9O2C4
poetry run vt dl -al en -sl en --list Amazon 0SGEGC629FCXQ5DJ9ORNE42PXK
poetry run vt dl -q 2160 -al en -sl en --keys AMZN 0H7LY5ZKKBM1MIW0244WE9O2C4 --bitrate CVBR+CBR
poetry run vt dl -al en --selected --keys Amazon 0SGEGC629FCXQ5DJ9ORNE42PXK
poetry run vt dl -al en -sl en --selected AMZN -b CBR https://www.primevideo.com/detail/0I1GTXP9ZKTV7AAD7E1LCWJCUX/
poetry run vt dl -al en --selected Amazon 0SGEGC629FCXQ5DJ9ORNE42PXK
poetry run vt dl -q 2160 -al en -sl en -r HDR --list Amazon 0H7LY5ZKKBM1MIW0244WE9O2C4
poetry run vt dl -al en -sl en -q 2160 --keys -r HDR AMZN -b CBR 0OSAJR8S2YWRSQCYS4J8MEGEXI
poetry run vt dl -q 2160 -al en -sl en --selected --keys Amazon 0H7LY5ZKKBM1MIW0244WE9O2C4
poetry run vt dl -al en -sl en -q 2160 -r HDR --selected -w S05E08-S05E24 AMZN -b CBR 0IQZZIJ6W6TT2CXPT6ZOZYX396
poetry run vt dl -q 2160 -al en -sl en --keys --no-cache --vcodec H265 --selected AMZN 0H7LY5ZKKBM1MIW0244WE9O2C4 --bitrate CBR
poetry run vt dl -q 2160 -al en -sl en --keys --no-cache --debug --vcodec H265 --selected AMZN 0H7LY5ZKKBM1MIW0244WE9O2C4 --bitrate CBR
python vinetrimmer1.py dl -al en -sl en -q 2160 -r HDR --selected -w S05E09-S05E24 AMZN -b CBR 0IQZZIJ6W6TT2CXPT6ZOZYX396
poetry run vt dl -al en -sl en --selected -q 2160 -r HDR -w S01E18-S01E25 AMZN -b CBR --ism 0IQZZIJ6W6TT2CXPT6ZOZYX396
http://ABHIRCQAAAAAAAAMCX3W7WLVKL54A.s3-bom-ww.cf.smooth.row.aiv-cdn.net/e5b0/2fe1/032c/4fae-b896-aca9d8bef3d4/170b36b1-856d-4c69-bbf6-feb6c979185a.ism/manifest
poetry run vt dl -al en -sl en -r HDR -w S01E01 --list --debug -q 2160 Amazon https://www.primevideo.com/detail/0HU52DR3U1R0FGI3KSUL00XYY7/
poetry run vt dl -al en -sl en -r HDR -w S01E01 --list -q 2160 AMZN https://www.primevideo.com/detail/0HU52DR3U1R0FGI3KSUL00XYY7
https://www.primevideo.com/detail/0HU52DR3U1R0FGI3KSUL00XYY7/
https://ABAKS6NAAAAAAAAMBIBDKKUP3ONNU.s3-iad-2.cf.smooth.row.aiv-cdn.net/357a/1bb0/c1f3/4a6b-b709-d6f2edf5b709/15eab8ec-d8ac-4c23-96fc-f5d89f459829.ism/manifest
http://ABHIRCQAAAAAAAAMHLTVNGLHRCITQ.s3-bom-ww.cf.smooth.row.aiv-cdn.net/e7ab/7c49/9743/4e53-ab5c-6d15516ecf15/52bf7e61-51cd-4e5d-bd68-834706f17789.ism/manifest
https://www.primevideo.com/region/eu/detail/0KYRVT4JDB957NXZO72E2MIFW5/
https://m-5884s3.ll.smooth.row.aiv-cdn.net/iad_2/3572/bbdc/73b4/404d-a100-802b1d9de4c6/862e2506-c20e-4ba7-bacc-d6b4775e7b62.ism/manifest
Max show
poetry run vt dl -al en -sl en -w S01E01 Max https://play.max.com/show/c8ea8e19-cae7-4683-9b62-cdbbed744784
UHD
poetry run vt dl -al en -sl en -v H265 --keys Max https://play.max.com/show/5756c2bf-36f8-4890-b1f9-ef168f1d8e9c
poetry run vt dl -al en -sl en --keys Max https://play.max.com/show/5756c2bf-36f8-4890-b1f9-ef168f1d8e9c
poetry run vt dl -al en -sl en -w S02E05-S02E10 --selected --proxy http://192.168.0.99:9766 Max
poetry run vt dl -al en -sl en -v H265 --list -w S01E01 --proxy http://192.168.0.99:9766 Max
poetry run vt dl -al en -sl en --list -w S01E01 --proxy http://192.168.0.99:9766 Max
poetry run vt dl -al all --selected --proxy http://192.168.0.99:9766 --debug -w S01E01 ATVP umc.cmc.7gvn6fekgfpq5fc72pgi1c47o
poetry run vt dl -al en -sl en --selected --debug -q 720 --proxy http://192.168.0.99:9766 -w S01E01 ATVP umc.cmc.1nfdfd5zlk05fo1bwwetzldy3
poetry run vt dl -al en -sl en --selected --proxy http://192.168.0.99:9766 -w S01E01 ATVP umc.cmc.1nfdfd5zlk05fo1bwwetzldy3
poetry run vt dl -al en -sl en --cdm hisense_smarttv_hu50a6100uw_sl3000 --selected --proxy http://192.168.0.99:9766 --keys -w S01E02 ATVP umc.cmc.1nfdfd5zlk05fo1bwwetzldy3
poetry run vt dl -al en -sl en --cdm hisense_smarttv_hu50a6100uw_sl3000 --selected --proxy http://192.168.0.99:9766 --keys -q 2160 ATVP umc.cmc.apzybj6eqf6pzccd97kev7bs

54
fix.txt Normal file

@ -0,0 +1,54 @@
D:\PlayReady-Amazon-Tool-main>poetry run vt dl -al en -sl en --selected --keys --cdm hisense_smarttv_he55a7000euwts_sl3000 AMZN -vq UHD -b CVBR+CBR https://www.primevideo.com/detail/0I1GTXP9ZKTV7AAD7E1LCWJCUX/
2025-02-07 22:26:57 [I] vt : vinetrimmer - Widevine DRM downloader and decrypter
2025-02-07 22:26:57 [I] vt : [Root Config] : D:\PlayReady-Amazon-Tool-main\vinetrimmer\vinetrimmer.yml
2025-02-07 22:26:57 [I] vt : [Service Configs] : D:\PlayReady-Amazon-Tool-main\vinetrimmer\Services
2025-02-07 22:26:57 [I] vt : [Cookies] : D:\PlayReady-Amazon-Tool-main\vinetrimmer\Cookies
2025-02-07 22:26:57 [I] vt : [CDM Devices] : D:\PlayReady-Amazon-Tool-main\vinetrimmer\devices
2025-02-07 22:26:57 [I] vt : [Cache] : D:\PlayReady-Amazon-Tool-main\vinetrimmer\Cache
2025-02-07 22:26:57 [I] vt : [Logs] : D:\PlayReady-Amazon-Tool-main\vinetrimmer\Logs
2025-02-07 22:26:57 [I] vt : [Temp Files] : D:\PlayReady-Amazon-Tool-main\Temp
2025-02-07 22:26:57 [I] vt : [Downloads] : D:\PlayReady-Amazon-Tool-main\Downloads
2025-02-07 22:26:57 [I] dl : + 1 Local Vault
2025-02-07 22:26:57 [I] dl : + 0 Remote Vaults
2025-02-07 22:26:57 [I] dl : + Loaded Device: hisense_smarttv_he55a7000euwts_sl3000 (L3000)
2025-02-07 22:26:57 [I] AMZN : Getting Account Region
2025-02-07 22:26:59 [I] AMZN : + Region: us
2025-02-07 22:26:59 [I] AMZN : + Using cached device bearer
2025-02-07 22:26:59 [I] AMZN : Retrieving Titles
2025-02-07 22:27:00 [I] Titles : Title: I Was Not Ready Da
2025-02-07 22:27:00 [I] AMZN : Getting tracks for I Was Not Ready Da (2020) [amzn1.dv.gti.30baee18-aa4c-1fc2-72cc-6e11d5e627d9]
2025-02-07 22:27:01 [I] AMZN : + Detected encodingVersion=2
2025-02-07 22:27:01 [I] AMZN : + Downloading CVBR MPD
2025-02-07 22:27:02 [I] AMZN : + Detected encodingVersion=2
2025-02-07 22:27:02 [I] AMZN : + Downloading CBR MPD
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "D:\PlayReady-Amazon-Tool-main\.venv\lib\site-packages\click\core.py", line 1161, in __call__
return self.main(*args, **kwargs)
File "D:\PlayReady-Amazon-Tool-main\.venv\lib\site-packages\click\core.py", line 1082, in main
rv = self.invoke(ctx)
File "D:\PlayReady-Amazon-Tool-main\.venv\lib\site-packages\click\core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "D:\PlayReady-Amazon-Tool-main\.venv\lib\site-packages\click\core.py", line 788, in invoke
return __callback(*args, **kwargs)
File "D:\PlayReady-Amazon-Tool-main\vinetrimmer\vinetrimmer.py", line 72, in main
dl()
File "D:\PlayReady-Amazon-Tool-main\.venv\lib\site-packages\click\core.py", line 1161, in __call__
return self.main(*args, **kwargs)
File "D:\PlayReady-Amazon-Tool-main\.venv\lib\site-packages\click\core.py", line 1082, in main
rv = self.invoke(ctx)
File "D:\PlayReady-Amazon-Tool-main\.venv\lib\site-packages\click\core.py", line 1697, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "D:\PlayReady-Amazon-Tool-main\.venv\lib\site-packages\click\core.py", line 1666, in _process_result
value = ctx.invoke(self._result_callback, value, **ctx.params)
File "D:\PlayReady-Amazon-Tool-main\.venv\lib\site-packages\click\core.py", line 788, in invoke
return __callback(*args, **kwargs)
File "D:\PlayReady-Amazon-Tool-main\.venv\lib\site-packages\click\decorators.py", line 33, in new_func
return f(get_current_context(), *args, **kwargs)
File "D:\PlayReady-Amazon-Tool-main\vinetrimmer\commands\dl.py", line 309, in result
title.tracks.add(service.get_tracks(title), warn_only=True)
File "D:\PlayReady-Amazon-Tool-main\vinetrimmer\services\amazon.py", line 321, in get_tracks
manifest, chosen_manifest, tracks = self.get_best_quality(title)
File "D:\PlayReady-Amazon-Tool-main\vinetrimmer\services\amazon.py", line 1051, in get_best_quality
best_quality = max(track_list, key=lambda x: x['max_size'])
TypeError: '>' not supported between instances of 'NoneType' and 'NoneType'
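The crash above is `get_best_quality` comparing manifests whose `max_size` is `None`: `max()` with `key=lambda x: x['max_size']` ends up evaluating `None > None`, which Python 3 rejects. A minimal sketch of the kind of guard that avoids it (hypothetical track data, not the actual `amazon.py` patch):

```python
# Hypothetical manifests as amazon.py might collect them; neither the
# CVBR nor the CBR MPD reported a size, so every max_size is None.
track_list = [
    {"manifest": "CVBR", "max_size": None},
    {"manifest": "CBR", "max_size": None},
]

# Only compare entries that actually have a size; otherwise fall back
# to the first manifest instead of letting max() compare None values.
sized = [t for t in track_list if t.get("max_size") is not None]
best_quality = max(sized, key=lambda t: t["max_size"]) if sized else track_list[0]
print(best_quality)  # {'manifest': 'CVBR', 'max_size': None}
```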


@ -1,5 +1,6 @@
@echo off @echo off
python -m pip install poetry python -m pip install poetry==1.8.5
poetry config virtualenvs.in-project true poetry config virtualenvs.in-project true
poetry lock --no-update
poetry install poetry install
pause pause

25
install.sh Normal file

@ -0,0 +1,25 @@
git clone -b dev --single-branch https://github.com/chu23465/VT-PR
cd VT-PR
python -m pip install poetry==1.8.5
poetry config virtualenvs.in-project true
poetry lock --no-update
poetry install
sudo add-apt-repository ppa:ubuntuhandbook1/apps -y
sudo apt update
sudo apt-get install ffmpeg aria2 mkvtoolnix libmediainfo0v5
ffmpeg --version
ffprobe --version
ffplay --version
aria2c --version
mkvmerge --version
rm -r ./binaries/
mkdir ./binaries/
mv -v ./linux_binaries/* ./binaries/
which aria2c | xargs -I{} cp {} ./binaries/
which ffmpeg | xargs -I{} cp {} ./binaries/
which ffprobe | xargs -I{} cp {} ./binaries/
which ffplay | xargs -I{} cp {} ./binaries/
which mkvmerge | xargs -I{} cp {} ./binaries/
cd ./binaries/
find . -type f -print0 | xargs -0 chmod +x
chmod +x *

BIN
linux_binaries/N_m3u8DL-RE Normal file

Binary file not shown.

BIN
linux_binaries/dovi_tool Normal file

Binary file not shown.

Binary file not shown.

BIN
linux_binaries/mp4decrypt Normal file

Binary file not shown.

BIN
linux_binaries/mp4dump Normal file

Binary file not shown.

BIN
linux_binaries/packager Normal file

Binary file not shown.

14
make.ps1 Normal file

@ -0,0 +1,14 @@
# Tip: add argument `run` to directly run after build for fast testing
Write-Output 'Creating Python Wheel package via Poetry'
& 'poetry' build -f wheel
Write-Output 'Building to self-contained folder/app via PyInstaller'
& 'poetry' run python pyinstaller.py
if ($args[0] -eq 'run') {
& 'dist/vinetrimmer/vinetrimmer.exe' ($args | Select-Object -Skip 1)
exit
}
Write-Output 'Done! See /dist for output files.'

17
make.sh Normal file

@ -0,0 +1,17 @@
#!/bin/sh
# Tip: add argument `run` to directly run after build for fast testing
echo 'Creating Python Wheel package via Poetry'
poetry build -f wheel
echo 'Building to self-contained folder/app via PyInstaller'
poetry run python pyinstaller.py
if [ "$1" = 'run' ]; then
shift
./dist/vinetrimmer/vinetrimmer "$@"
exit
fi
echo 'Done! See /dist for output files.'

2575
poetry.lock generated

File diff suppressed because it is too large.

96
pyinstaller.py Normal file

@ -0,0 +1,96 @@
#!/usr/bin/env python3
import itertools
import os
import shutil
import sys
import toml
from PyInstaller.__main__ import run
if sys.platform == "win32":
from PyInstaller.utils.win32.versioninfo import (FixedFileInfo, StringFileInfo, StringStruct,
StringTable, VarFileInfo, VarStruct, VSVersionInfo)
#from PyInstaller.utils.win32.versioninfo import SetVersion
SCRIPT_PATH = os.path.dirname(os.path.realpath(__file__))
"""Load pyproject.toml information."""
project = toml.load(os.path.join(SCRIPT_PATH, "pyproject.toml"))
poetry = project["tool"]["poetry"]
"""Configuration options that may be changed or referenced often."""
DEBUG = False # When False, removes un-needed data after build has finished
NAME = poetry["name"]
AUTHOR = "vinetrimmer contributors"
VERSION = poetry["version"]
ICON_FILE = "assets/icon.ico" # pass None to use default icon
ONE_FILE = False # Must be False if using setup.iss
CONSOLE = True # If build is intended for GUI, set to False
ADDITIONAL_DATA = [
# (local file path, destination in build output)
]
HIDDEN_IMPORTS = []
EXTRA_ARGS = [
"-y", "--win-private-assemblies", "--win-no-prefer-redirects"
]
"""Prepare environment to ensure output data is fresh."""
shutil.rmtree("build", ignore_errors=True)
shutil.rmtree("dist/vinetrimmer", ignore_errors=True)
# we don't want to use any spec, only the configuration set in this file
try:
os.unlink(f"{NAME}.spec")
except FileNotFoundError:
pass
"""Run PyInstaller with the provided configuration."""
run([
"vinetrimmer/vinetrimmer.py",
"-n", NAME,
"-i", ["NONE", ICON_FILE][bool(ICON_FILE)],
["-D", "-F"][ONE_FILE],
["-w", "-c"][CONSOLE],
*itertools.chain(*[["--add-data", os.pathsep.join(x)] for x in ADDITIONAL_DATA]),
*itertools.chain(*[["--hidden-import", x] for x in HIDDEN_IMPORTS]),
*EXTRA_ARGS
])
if sys.platform == "win32":
"""Set Version Info Structure."""
VERSION_4_TUP = tuple(map(int, f"{VERSION}.0".split(".")))
VERSION_4_STR = ".".join(map(str, VERSION_4_TUP))
#SetVersion(
# "dist/{0}/{0}.exe".format(NAME),
# VSVersionInfo(
# ffi=FixedFileInfo(
# filevers=VERSION_4_TUP,
# prodvers=VERSION_4_TUP
# ),
# kids=[
# StringFileInfo([StringTable(
# "040904B0", # ?
# [
# StringStruct("Comments", NAME),
# StringStruct("CompanyName", AUTHOR),
# StringStruct("FileDescription", "Widevine DRM downloader and decrypter"),
# StringStruct("FileVersion", VERSION_4_STR),
# StringStruct("InternalName", NAME),
# StringStruct("LegalCopyright", f"Copyright (C) 2019-2021 {AUTHOR}"),
# StringStruct("OriginalFilename", ""),
# StringStruct("ProductName", NAME),
# StringStruct("ProductVersion", VERSION_4_STR)
# ]
# )]),
# VarFileInfo([VarStruct("Translation", [0, 1200])]) # ?
# ]
# )
#)
if not DEBUG:
shutil.rmtree("build", ignore_errors=True)
# we don't want to keep the generated spec
try:
os.unlink(f"{NAME}.spec")
except FileNotFoundError:
pass


@ -5,17 +5,17 @@ build-backend = 'poetry.core.masonry.api'
 [tool.poetry]
 name = 'vinetrimmer'
 version = '0.1.0'
-description = 'Playready DRM downloader and decrypter'
+description = 'Widevine and Playready DRM downloader and decrypter'
 authors = []
 [tool.poetry.dependencies]
-python = "^3.8"
+python = ">=3.10,<3.13"
 appdirs = "^1.4.4"
-beautifulsoup4 = "~4.8.2"
+beautifulsoup4 = "^4.8.2"
 click = "^8.0.1"
 cffi = "^1.16.0"
 coloredlogs = "^15.0"
-construct = "2.10.70"
+construct = "2.8.8"
 crccheck = "^1.0"
 cryptography = "^43.0.3"
 ecpy = "^1.2.5"
@ -23,32 +23,40 @@ httpx = "^0.23.0"
 isodate = "^0.6.1"
 jsonpickle = "^2.0.0"
 langcodes = { extras = ["data"], version = "^3.1.0" }
-lxml = "^4.6.3"
+lxml = "^5.3.0"
-m3u8 = "^0.9.0"
+m3u8 = { path = "./scripts/m3u8", develop = true }
 marisa-trie = "^1.1.0"
+poetry = "1.8.5"
 pproxy = "^2.7.7"
-protobuf = "^3.13.0"
+protobuf3 = { path = "./scripts/protobuf3", develop = true }
+pycountry = "^24.6.1"
 pycaption = "^2.1.1"
 pycryptodome = "^3.21.0"
 pycryptodomex = "^3.4.3"
 pyhulu = "^1.1.2"
 pymediainfo = "^5.0.3"
 PyMySQL = { extras = ["rsa"], version = "^1.0.2" }
+pymp4 = "^1.4.0"
+pyplayready = { path = "./scripts/pyplayready", develop = true }
+pywidevine = { path = "./scripts/pywidevine", develop = true }
 pysubs2 = "^1.6.1"
 PyYAML = "^6.0.1"
-requests = { extras = ["socks"], version = "2.29.0" }
+requests = { extras = ["socks"], version = "2.32.3" }
+subby = { path = "./scripts/subby", develop = true }
 tldextract = "^3.1.0"
 toml = "^0.10.2"
+tqdm = "^4.67.0"
 Unidecode = "^1.2.0"
 validators = "^0.18.2"
 websocket-client = "^1.1.0"
-xmltodict = "^0.13.0"
+xmltodict = "^0.14.2"
-yt-dlp = "^2022.11.11"
+yt-dlp = "^2024.11.11"
+ushlex = "^0.99.1"
-[tool.poetry.dev-dependencies]
+[tool.poetry.group.dev.dependencies]
 flake8 = "^3.8.4"
 isort = "^5.9.2"
-pyinstaller = "^4.4"
+pyinstaller = "5.13.2"
 [tool.poetry.scripts]
 vt = 'vinetrimmer.vinetrimmer:main'


@ -1,5 +0,0 @@
requests
pycryptodome
ecpy
construct
click


@ -13,13 +13,13 @@ http.headers.update({
 })
 # get player fragment page
 fragment = http.get(sys.argv[1].replace("/videos/", "/player5_fragment/")).text
-# get encrypted manifest urls for both hls and dash
+# get encrypted manifest.xml urls for both hls and dash
 encrypted_manifests = {k: bytes.fromhex(re.findall(
     r'<source\s+type="application/' + v + r'"\s+src=".+?/e-stream-url\?stream=(.+?)"',
     fragment
 )[0][0]) for k, v in {"hls": "x-mpegURL", "dash": r"dash\+xml"}.items()}
-# decrypt all manifest urls in manifests
+# decrypt all manifest.xml urls in manifests
 m = re.search(r"^\s*chabi:\s*'(.+?)'", fragment, re.MULTILINE)
 if not m:
     raise ValueError("Unable to get key")

19
scripts/dsnp_kid_fix.py Normal file

@ -0,0 +1,19 @@
import uuid
import base64
import xmltodict
psshPR = """
xAEAAAEAAQC6ATwAVwBSAE0ASABFAEEARABFAFIAIAB4AG0AbABuAHMAPQAiAGgAdAB0AHAAOgAvAC8AcwBjAGgAZQBtAGEAcwAuAG0AaQBjAHIAbwBzAG8AZgB0AC4AYwBvAG0ALwBEAFIATQAvADIAMAAwADcALwAwADMALwBQAGwAYQB5AFIAZQBhAGQAeQBIAGUAYQBkAGUAcgAiACAAdgBlAHIAcwBpAG8AbgA9ACIANAAuADAALgAwAC4AMAAiAD4APABEAEEAVABBAD4APABQAFIATwBUAEUAQwBUAEkATgBGAE8APgA8AEsARQBZAEwARQBOAD4AMQA2ADwALwBLAEUAWQBMAEUATgA+ADwAQQBMAEcASQBEAD4AQQBFAFMAQwBUAFIAPAAvAEEATABHAEkARAA+ADwALwBQAFIATwBUAEUAQwBUAEkATgBGAE8APgA8AEsASQBEAD4ATAA0AGkAWQBTAHIAaQB2AGEARQAyAFQASwBHAFAAZQBlADkAYgB1AGcAZwA9AD0APAAvAEsASQBEAD4APAAvAEQAQQBUAEEAPgA8AC8AVwBSAE0ASABFAEEARABFAFIAPgA=
"""
xml_str = base64.b64decode(psshPR).decode("utf-16-le", "ignore")
xml_str = xml_str[xml_str.index("<"):]
kids = []
try:
kids = [uuid.UUID(base64.b64decode(kid_xml['@VALUE']).hex()).bytes_le.hex().upper() for kid_xml in xmltodict.parse(xml_str)['WRMHEADER']['DATA']['CUSTOMATTRIBUTES']['KIDS']['KID']]
except (KeyError, TypeError):
# headers without CUSTOMATTRIBUTES carry a single <KID> under DATA instead
another_kid = uuid.UUID(base64.b64decode(xmltodict.parse(xml_str)['WRMHEADER']['DATA']["KID"]).hex()).bytes_le.hex().upper()
if another_kid not in kids:
kids.append(another_kid.upper())
print(kids)
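The `bytes_le` call above is doing the real work: PlayReady WRMHEADER KIDs are little-endian GUIDs, so the first three UUID fields must be byte-swapped to obtain the big-endian hex KID used elsewhere. A self-contained illustration with an arbitrary example UUID:

```python
import uuid

# The first three fields (4-2-2 bytes) are reversed in the
# little-endian GUID form; the last 8 bytes keep their order.
kid = uuid.UUID("00112233-4455-6677-8899-aabbccddeeff")
print(kid.bytes.hex())     # 00112233445566778899aabbccddeeff
print(kid.bytes_le.hex())  # 33221100554477668899aabbccddeeff
```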


@ -0,0 +1,14 @@
# https://editorconfig.org
root = true
[*.py]
charset = utf-8
indent_style = space
indent_size = 4
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
[Makefile]
indent_style = tab
indent_size = 4

40
scripts/m3u8/.github/workflows/main.yml vendored Normal file

@ -0,0 +1,40 @@
# This is a basic workflow to help you get started with Actions
name: CI
# Controls when the action will run.
on:
# Triggers the workflow on push or pull request events but only for the master branch
push:
branches: [ master ]
pull_request:
branches: [ master ]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
strategy:
# You can use PyPy versions in python-version.
# For example, pypy2 and pypy3
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
# Runs a single command using the runners shell
- name: Run all tests
run: ./runtests

19
scripts/m3u8/.github/workflows/ruff.yml vendored Normal file

@ -0,0 +1,19 @@
name: Ruff
run-name: Ruff
on: [ push, pull_request ]
jobs:
ruff:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: astral-sh/ruff-action@v1
ruff_format:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: astral-sh/ruff-action@v1
with:
args: format --check --diff

17
scripts/m3u8/.gitignore vendored Normal file

@ -0,0 +1,17 @@
*.pyc
*.egg-info
tests/server.stdout
dist/
build/
bin/
include/
lib/
lib64/
local/
.coverage
.cache
.python-version
.idea/
.vscode/
venv/
pyvenv.cfg

11
scripts/m3u8/LICENSE Normal file

@ -0,0 +1,11 @@
m3u8 is licensed under the MIT License:
The MIT License
Copyright (c) 2012 globo.com webmedia@corp.globo.com
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

3
scripts/m3u8/MANIFEST.in Normal file

@ -0,0 +1,3 @@
include requirements.txt
include LICENSE
include README.md

104
scripts/m3u8/README.md Normal file

@ -0,0 +1,104 @@
![image](https://github.com/globocom/m3u8/actions/workflows/main.yml/badge.svg) [![image](https://badge.fury.io/py/m3u8.svg)](https://badge.fury.io/py/m3u8)
# m3u8
Python [m3u8](https://tools.ietf.org/html/rfc8216) parser.
# Documentation
## Loading a playlist
To load a playlist into an object from uri, file path or directly from
string, use the `load/loads` functions:
```python
import m3u8
playlist = m3u8.load('http://videoserver.com/playlist.m3u8') # this could also be an absolute filename
print(playlist.segments)
print(playlist.target_duration)
# if you already have the content as string, use
playlist = m3u8.loads('#EXTM3U ... etc ... ')
```
## Dumping a playlist
To dump a playlist from an object to the console or a file, use the
`dump/dumps` functions:
``` python
import m3u8
playlist = m3u8.load('http://videoserver.com/playlist.m3u8')
print(playlist.dumps())
# if you want to write a file from its content
playlist.dump('playlist.m3u8')
```
# Supported tags
- [\#EXT-X-TARGETDURATION](https://tools.ietf.org/html/rfc8216#section-4.3.3.1)
- [\#EXT-X-MEDIA-SEQUENCE](https://tools.ietf.org/html/rfc8216#section-4.3.3.2)
- [\#EXT-X-DISCONTINUITY-SEQUENCE](https://tools.ietf.org/html/rfc8216#section-4.3.3.3)
- [\#EXT-X-PROGRAM-DATE-TIME](https://tools.ietf.org/html/rfc8216#section-4.3.2.6)
- [\#EXT-X-MEDIA](https://tools.ietf.org/html/rfc8216#section-4.3.4.1)
- [\#EXT-X-PLAYLIST-TYPE](https://tools.ietf.org/html/rfc8216#section-4.3.3.5)
- [\#EXT-X-KEY](https://tools.ietf.org/html/rfc8216#section-4.3.2.4)
- [\#EXT-X-STREAM-INF](https://tools.ietf.org/html/rfc8216#section-4.3.4.2)
- [\#EXT-X-VERSION](https://tools.ietf.org/html/rfc8216#section-4.3.1.2)
- [\#EXT-X-ALLOW-CACHE](https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming-07#section-3.3.6)
- [\#EXT-X-ENDLIST](https://tools.ietf.org/html/rfc8216#section-4.3.3.4)
- [\#EXTINF](https://tools.ietf.org/html/rfc8216#section-4.3.2.1)
- [\#EXT-X-I-FRAMES-ONLY](https://tools.ietf.org/html/rfc8216#section-4.3.3.6)
- [\#EXT-X-BITRATE](https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis#section-4.4.4.8)
- [\#EXT-X-BYTERANGE](https://tools.ietf.org/html/rfc8216#section-4.3.2.2)
- [\#EXT-X-I-FRAME-STREAM-INF](https://tools.ietf.org/html/rfc8216#section-4.3.4.3)
- [\#EXT-X-IMAGES-ONLY](https://github.com/image-media-playlist/spec/blob/master/image_media_playlist_v0_4.pdf)
- [\#EXT-X-IMAGE-STREAM-INF](https://github.com/image-media-playlist/spec/blob/master/image_media_playlist_v0_4.pdf)
- [\#EXT-X-TILES](https://github.com/image-media-playlist/spec/blob/master/image_media_playlist_v0_4.pdf)
- [\#EXT-X-DISCONTINUITY](https://tools.ietf.org/html/rfc8216#section-4.3.2.3)
- \#EXT-X-CUE-OUT
- \#EXT-X-CUE-OUT-CONT
- \#EXT-X-CUE-IN
- \#EXT-X-CUE-SPAN
- \#EXT-OATCLS-SCTE35
- [\#EXT-X-INDEPENDENT-SEGMENTS](https://tools.ietf.org/html/rfc8216#section-4.3.5.1)
- [\#EXT-X-MAP](https://tools.ietf.org/html/rfc8216#section-4.3.2.5)
- [\#EXT-X-START](https://tools.ietf.org/html/rfc8216#section-4.3.5.2)
- [\#EXT-X-SERVER-CONTROL](https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis#section-4.4.3.8)
- [\#EXT-X-PART-INF](https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis#section-4.4.3.7)
- [\#EXT-X-PART](https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis#section-4.4.4.9)
- [\#EXT-X-RENDITION-REPORT](https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis#section-4.4.5.4)
- [\#EXT-X-SKIP](https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis#section-4.4.5.2)
- [\#EXT-X-SESSION-DATA](https://tools.ietf.org/html/rfc8216#section-4.3.4.4)
- [\#EXT-X-PRELOAD-HINT](https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis-09#section-4.4.5.3)
- [\#EXT-X-SESSION-KEY](https://tools.ietf.org/html/rfc8216#section-4.3.4.5)
- [\#EXT-X-DATERANGE](https://tools.ietf.org/html/rfc8216#section-4.3.2.7)
- [\#EXT-X-GAP](https://tools.ietf.org/html/draft-pantos-hls-rfc8216bis-05#section-4.4.2.7)
- [\#EXT-X-CONTENT-STEERING](https://tools.ietf.org/html/draft-pantos-hls-rfc8216bis-10#section-4.4.6.64)
# Frequently Asked Questions
- [FAQ](https://github.com/globocom/m3u8/wiki/FAQ)
# Running Tests
``` bash
$ ./runtests
```
# Contributing
All contributions are welcome, but we will merge a pull request if, and
only if, it
- Has tests
- Follows the code conventions
If you plan to implement a new feature or something that will take more
than a few minutes, please open an issue to make sure we don't work on
the same thing.


@ -0,0 +1,105 @@
# Copyright 2014 Globo.com Player authors. All rights reserved.
# Use of this source code is governed by a MIT License
# license that can be found in the LICENSE file.
import os
from urllib.parse import urljoin, urlsplit
from m3u8.httpclient import DefaultHTTPClient
from m3u8.model import (
M3U8,
ContentSteering,
DateRange,
DateRangeList,
IFramePlaylist,
ImagePlaylist,
Key,
Media,
MediaList,
PartialSegment,
PartialSegmentList,
PartInformation,
Playlist,
PlaylistList,
PreloadHint,
RenditionReport,
RenditionReportList,
Segment,
SegmentList,
ServerControl,
Skip,
Start,
Tiles,
)
from m3u8.parser import ParseError, parse
__all__ = (
"M3U8",
"Segment",
"SegmentList",
"PartialSegment",
"PartialSegmentList",
"Key",
"Playlist",
"IFramePlaylist",
"Media",
"MediaList",
"PlaylistList",
"Start",
"RenditionReport",
"RenditionReportList",
"ServerControl",
"Skip",
"PartInformation",
"PreloadHint",
"DateRange",
"DateRangeList",
"ContentSteering",
"ImagePlaylist",
"Tiles",
"loads",
"load",
"parse",
"ParseError",
)
def loads(content, uri=None, custom_tags_parser=None):
"""
Given a string with a m3u8 content, returns a M3U8 object.
Optionally parses a uri to set a correct base_uri on the M3U8 object.
Raises ValueError if invalid content
"""
if uri is None:
return M3U8(content, custom_tags_parser=custom_tags_parser)
else:
base_uri = urljoin(uri, ".")
return M3U8(content, base_uri=base_uri, custom_tags_parser=custom_tags_parser)
def load(
uri,
timeout=None,
headers={},
custom_tags_parser=None,
http_client=DefaultHTTPClient(),
verify_ssl=True,
):
"""
Retrieves the content from a given URI and returns a M3U8 object.
Raises ValueError if invalid content or IOError if request fails.
"""
base_uri_parts = urlsplit(uri)
if base_uri_parts.scheme and base_uri_parts.netloc:
content, base_uri = http_client.download(uri, timeout, headers, verify_ssl)
return M3U8(content, base_uri=base_uri, custom_tags_parser=custom_tags_parser)
else:
return _load_from_file(uri, custom_tags_parser)
def _load_from_file(uri, custom_tags_parser=None):
with open(uri, encoding="utf8") as fileobj:
raw_content = fileobj.read().strip()
base_uri = os.path.dirname(uri)
return M3U8(raw_content, base_uri=base_uri, custom_tags_parser=custom_tags_parser)


@ -0,0 +1,36 @@
import gzip
import ssl
import urllib.request
from urllib.parse import urljoin
class DefaultHTTPClient:
def __init__(self, proxies=None):
self.proxies = proxies
def download(self, uri, timeout=None, headers={}, verify_ssl=True):
proxy_handler = urllib.request.ProxyHandler(self.proxies)
https_handler = HTTPSHandler(verify_ssl=verify_ssl)
opener = urllib.request.build_opener(proxy_handler, https_handler)
opener.addheaders = headers.items()
resource = opener.open(uri, timeout=timeout)
base_uri = urljoin(resource.geturl(), ".")
if resource.info().get("Content-Encoding") == "gzip":
content = gzip.decompress(resource.read()).decode(
resource.headers.get_content_charset(failobj="utf-8")
)
else:
content = resource.read().decode(
resource.headers.get_content_charset(failobj="utf-8")
)
return content, base_uri
class HTTPSHandler:
def __new__(self, verify_ssl=True):
context = ssl.create_default_context()
if not verify_ssl:
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
return urllib.request.HTTPSHandler(context=context)
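A usage sketch for the client above (hypothetical proxy address), passed through `m3u8.load` as wired up in `__init__.py`:

```python
import m3u8
from m3u8.httpclient import DefaultHTTPClient

# Hypothetical local proxy; DefaultHTTPClient forwards the mapping to
# urllib's ProxyHandler. verify_ssl=False selects the CERT_NONE context
# built by HTTPSHandler above.
client = DefaultHTTPClient(proxies={"https": "http://127.0.0.1:8080"})
playlist = m3u8.load(
    "https://example.com/master.m3u8",
    http_client=client,
    verify_ssl=False,
)
print(playlist.playlists)
```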


@ -0,0 +1,52 @@
from os.path import dirname
from urllib.parse import urljoin, urlsplit
class BasePathMixin:
@property
def absolute_uri(self):
if self.uri is None:
return None
ret = urljoin(self.base_uri, self.uri)
if self.base_uri:
base_uri_parts = urlsplit(self.base_uri)
if (not base_uri_parts.scheme) and (not base_uri_parts.netloc):
return ret
if not urlsplit(ret).scheme:
raise ValueError("There can not be `absolute_uri` with no `base_uri` set")
return ret
@property
def base_path(self):
if self.uri is None:
return None
return dirname(self.get_path_from_uri())
def get_path_from_uri(self):
"""Some URIs have a slash in the query string."""
return self.uri.split("?")[0]
@base_path.setter
def base_path(self, newbase_path):
if self.uri is not None:
if not self.base_path:
self.uri = f"{newbase_path}/{self.uri}"
else:
self.uri = self.uri.replace(self.base_path, newbase_path)
class GroupedBasePathMixin:
def _set_base_uri(self, new_base_uri):
for item in self:
item.base_uri = new_base_uri
base_uri = property(None, _set_base_uri)
def _set_base_path(self, newbase_path):
for item in self:
item.base_path = newbase_path
base_path = property(None, _set_base_path)
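A small sketch of the mixin in action (hypothetical URIs, using `m3u8.Segment`, which inherits `BasePathMixin`): `absolute_uri` resolves the relative `uri` against `base_uri`, while the `base_path` setter rewrites only the directory part:

```python
import m3u8

seg = m3u8.Segment(uri="720p/seg1.ts", base_uri="http://cdn.example.com/v/")
print(seg.absolute_uri)  # http://cdn.example.com/v/720p/seg1.ts

# Swap the rendition directory without touching the file name.
seg.base_path = "1080p"
print(seg.uri)           # 1080p/seg1.ts
```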

1673
scripts/m3u8/m3u8/model.py Normal file

File diff suppressed because it is too large.

795
scripts/m3u8/m3u8/parser.py Normal file

@ -0,0 +1,795 @@
# Copyright 2014 Globo.com Player authors. All rights reserved.
# Use of this source code is governed by a MIT License
# license that can be found in the LICENSE file.
import itertools
import re
from datetime import datetime, timedelta
try:
from backports.datetime_fromisoformat import MonkeyPatch
MonkeyPatch.patch_fromisoformat()
except ImportError:
pass
from m3u8 import protocol, version_matching
"""
http://tools.ietf.org/html/draft-pantos-http-live-streaming-08#section-3.2
http://stackoverflow.com/questions/2785755/how-to-split-but-ignore-separators-in-quoted-strings-in-python
"""
ATTRIBUTELISTPATTERN = re.compile(r"""((?:[^,"']|"[^"]*"|'[^']*')+)""")
def cast_date_time(value):
return datetime.fromisoformat(value)
def format_date_time(value, **kwargs):
return value.isoformat(**kwargs)
class ParseError(Exception):
def __init__(self, lineno, line):
self.lineno = lineno
self.line = line
def __str__(self):
return "Syntax error in manifest on line %d: %s" % (self.lineno, self.line)
def parse(content, strict=False, custom_tags_parser=None):
"""
Given a M3U8 playlist content returns a dictionary with all data found
"""
data = {
"media_sequence": 0,
"is_variant": False,
"is_endlist": False,
"is_i_frames_only": False,
"is_independent_segments": False,
"is_images_only": False,
"playlist_type": None,
"playlists": [],
"segments": [],
"iframe_playlists": [],
"image_playlists": [],
"tiles": [],
"media": [],
"keys": [],
"rendition_reports": [],
"skip": {},
"part_inf": {},
"session_data": [],
"session_keys": [],
"segment_map": [],
}
state = {
"expect_segment": False,
"expect_playlist": False,
"current_key": None,
"current_segment_map": None,
}
lines = string_to_lines(content)
if strict:
found_errors = version_matching.validate(lines)
if len(found_errors) > 0:
raise Exception(found_errors)
for lineno, line in enumerate(lines, 1):
line = line.strip()
parse_kwargs = {
"line": line,
"lineno": lineno,
"data": data,
"state": state,
"strict": strict,
}
# Call custom parser if needed
if line.startswith("#") and callable(custom_tags_parser):
go_to_next_line = custom_tags_parser(line, lineno, data, state)
# Do not try to parse other standard tags on this line if custom_tags_parser
# function returns `True`
if go_to_next_line:
continue
if line.startswith(protocol.ext_x_byterange):
_parse_byterange(**parse_kwargs)
continue
elif line.startswith(protocol.ext_x_bitrate):
_parse_bitrate(**parse_kwargs)
elif line.startswith(protocol.ext_x_targetduration):
_parse_targetduration(**parse_kwargs)
elif line.startswith(protocol.ext_x_media_sequence):
_parse_media_sequence(**parse_kwargs)
elif line.startswith(protocol.ext_x_discontinuity_sequence):
_parse_discontinuity_sequence(**parse_kwargs)
elif line.startswith(protocol.ext_x_program_date_time):
_parse_program_date_time(**parse_kwargs)
elif line.startswith(protocol.ext_x_discontinuity):
_parse_discontinuity(**parse_kwargs)
elif line.startswith(protocol.ext_x_cue_out_cont):
_parse_cueout_cont(**parse_kwargs)
elif line.startswith(protocol.ext_x_cue_out):
_parse_cueout(**parse_kwargs)
elif line.startswith(f"{protocol.ext_oatcls_scte35}:"):
_parse_oatcls_scte35(**parse_kwargs)
elif line.startswith(f"{protocol.ext_x_asset}:"):
_parse_asset(**parse_kwargs)
elif line.startswith(protocol.ext_x_cue_in):
_parse_cue_in(**parse_kwargs)
elif line.startswith(protocol.ext_x_cue_span):
_parse_cue_span(**parse_kwargs)
elif line.startswith(protocol.ext_x_version):
_parse_version(**parse_kwargs)
elif line.startswith(protocol.ext_x_allow_cache):
_parse_allow_cache(**parse_kwargs)
elif line.startswith(protocol.ext_x_key):
_parse_key(**parse_kwargs)
elif line.startswith(protocol.extinf):
_parse_extinf(**parse_kwargs)
elif line.startswith(protocol.ext_x_stream_inf):
_parse_stream_inf(**parse_kwargs)
elif line.startswith(protocol.ext_x_i_frame_stream_inf):
_parse_i_frame_stream_inf(**parse_kwargs)
elif line.startswith(protocol.ext_x_media):
_parse_media(**parse_kwargs)
elif line.startswith(protocol.ext_x_playlist_type):
_parse_playlist_type(**parse_kwargs)
elif line.startswith(protocol.ext_i_frames_only):
_parse_i_frames_only(**parse_kwargs)
elif line.startswith(protocol.ext_is_independent_segments):
_parse_is_independent_segments(**parse_kwargs)
elif line.startswith(protocol.ext_x_endlist):
_parse_endlist(**parse_kwargs)
elif line.startswith(protocol.ext_x_map):
_parse_x_map(**parse_kwargs)
elif line.startswith(protocol.ext_x_start):
_parse_start(**parse_kwargs)
elif line.startswith(protocol.ext_x_server_control):
_parse_server_control(**parse_kwargs)
elif line.startswith(protocol.ext_x_part_inf):
_parse_part_inf(**parse_kwargs)
elif line.startswith(protocol.ext_x_rendition_report):
_parse_rendition_report(**parse_kwargs)
elif line.startswith(protocol.ext_x_part):
_parse_part(**parse_kwargs)
elif line.startswith(protocol.ext_x_skip):
_parse_skip(**parse_kwargs)
elif line.startswith(protocol.ext_x_session_data):
_parse_session_data(**parse_kwargs)
elif line.startswith(protocol.ext_x_session_key):
_parse_session_key(**parse_kwargs)
elif line.startswith(protocol.ext_x_preload_hint):
_parse_preload_hint(**parse_kwargs)
elif line.startswith(protocol.ext_x_daterange):
_parse_daterange(**parse_kwargs)
elif line.startswith(protocol.ext_x_gap):
_parse_gap(**parse_kwargs)
elif line.startswith(protocol.ext_x_content_steering):
_parse_content_steering(**parse_kwargs)
elif line.startswith(protocol.ext_x_image_stream_inf):
_parse_image_stream_inf(**parse_kwargs)
elif line.startswith(protocol.ext_x_images_only):
_parse_is_images_only(**parse_kwargs)
elif line.startswith(protocol.ext_x_tiles):
_parse_tiles(**parse_kwargs)
# #EXTM3U should be present.
elif line.startswith(protocol.ext_m3u):
pass
# Blank lines are ignored.
elif line.strip() == "":
pass
# Lines that don't start with # are either segments or playlists.
elif (not line.startswith("#")) and (state["expect_segment"]):
_parse_ts_chunk(**parse_kwargs)
elif (not line.startswith("#")) and (state["expect_playlist"]):
_parse_variant_playlist(**parse_kwargs)
# Lines that haven't been recognized by any of the parsers above are illegal
# in strict mode.
elif strict:
raise ParseError(lineno, line)
# Handle remaining partial segments.
if "segment" in state:
data["segments"].append(state.pop("segment"))
return data
def _parse_key(line, data, state, **kwargs):
params = ATTRIBUTELISTPATTERN.split(line.replace(protocol.ext_x_key + ":", ""))[
1::2
]
key = {}
for param in params:
name, value = param.split("=", 1)
key[normalize_attribute(name)] = remove_quotes(value)
state["current_key"] = key
if key not in data["keys"]:
data["keys"].append(key)
def _parse_extinf(line, state, lineno, strict, **kwargs):
chunks = line.replace(protocol.extinf + ":", "").split(",", 1)
if len(chunks) == 2:
duration, title = chunks
elif len(chunks) == 1:
if strict:
raise ParseError(lineno, line)
else:
duration = chunks[0]
title = ""
if "segment" not in state:
state["segment"] = {}
state["segment"]["duration"] = float(duration)
state["segment"]["title"] = title
state["expect_segment"] = True
def _parse_ts_chunk(line, data, state, **kwargs):
segment = state.pop("segment")
if state.get("program_date_time"):
segment["program_date_time"] = state.pop("program_date_time")
if state.get("current_program_date_time"):
segment["current_program_date_time"] = state["current_program_date_time"]
state["current_program_date_time"] += timedelta(seconds=segment["duration"])
segment["uri"] = line
segment["cue_in"] = state.pop("cue_in", False)
segment["cue_out"] = state.pop("cue_out", False)
segment["cue_out_start"] = state.pop("cue_out_start", False)
segment["cue_out_explicitly_duration"] = state.pop(
"cue_out_explicitly_duration", False
)
scte_op = state.get if segment["cue_out"] else state.pop
segment["scte35"] = scte_op("current_cue_out_scte35", None)
segment["oatcls_scte35"] = scte_op("current_cue_out_oatcls_scte35", None)
segment["scte35_duration"] = scte_op("current_cue_out_duration", None)
segment["scte35_elapsedtime"] = scte_op("current_cue_out_elapsedtime", None)
segment["asset_metadata"] = scte_op("asset_metadata", None)
segment["discontinuity"] = state.pop("discontinuity", False)
if state.get("current_key"):
segment["key"] = state["current_key"]
else:
# For unencrypted segments, the initial key would be None
if None not in data["keys"]:
data["keys"].append(None)
if state.get("current_segment_map"):
segment["init_section"] = state["current_segment_map"]
segment["dateranges"] = state.pop("dateranges", None)
segment["gap_tag"] = state.pop("gap", None)
data["segments"].append(segment)
state["expect_segment"] = False
def _parse_attribute_list(prefix, line, attribute_parser, default_parser=None):
params = ATTRIBUTELISTPATTERN.split(line.replace(prefix + ":", ""))[1::2]
attributes = {}
if not line.startswith(prefix + ":"):
return attributes
for param in params:
param_parts = param.split("=", 1)
if len(param_parts) == 1:
name = ""
value = param_parts[0]
else:
name, value = param_parts
name = normalize_attribute(name)
if name in attribute_parser:
value = attribute_parser[name](value)
elif default_parser is not None:
value = default_parser(value)
attributes[name] = value
return attributes
def _parse_stream_inf(line, data, state, **kwargs):
state["expect_playlist"] = True
data["is_variant"] = True
data["media_sequence"] = None
attribute_parser = remove_quotes_parser(
"codecs",
"audio",
"video",
"video_range",
"subtitles",
"pathway_id",
"stable_variant_id",
)
attribute_parser["program_id"] = int
attribute_parser["bandwidth"] = lambda x: int(float(x))
attribute_parser["average_bandwidth"] = int
attribute_parser["frame_rate"] = float
attribute_parser["hdcp_level"] = str
state["stream_info"] = _parse_attribute_list(
protocol.ext_x_stream_inf, line, attribute_parser
)
def _parse_i_frame_stream_inf(line, data, **kwargs):
attribute_parser = remove_quotes_parser(
"codecs", "uri", "pathway_id", "stable_variant_id"
)
attribute_parser["program_id"] = int
attribute_parser["bandwidth"] = int
attribute_parser["average_bandwidth"] = int
attribute_parser["hdcp_level"] = str
iframe_stream_info = _parse_attribute_list(
protocol.ext_x_i_frame_stream_inf, line, attribute_parser
)
iframe_playlist = {
"uri": iframe_stream_info.pop("uri"),
"iframe_stream_info": iframe_stream_info,
}
data["iframe_playlists"].append(iframe_playlist)
def _parse_image_stream_inf(line, data, **kwargs):
attribute_parser = remove_quotes_parser(
"codecs", "uri", "pathway_id", "stable_variant_id"
)
attribute_parser["program_id"] = int
attribute_parser["bandwidth"] = int
attribute_parser["average_bandwidth"] = int
attribute_parser["resolution"] = str
image_stream_info = _parse_attribute_list(
protocol.ext_x_image_stream_inf, line, attribute_parser
)
image_playlist = {
"uri": image_stream_info.pop("uri"),
"image_stream_info": image_stream_info,
}
data["image_playlists"].append(image_playlist)
def _parse_is_images_only(line, data, **kwargs):
data["is_images_only"] = True
def _parse_tiles(line, data, state, **kwargs):
attribute_parser = remove_quotes_parser("uri")
attribute_parser["resolution"] = str
attribute_parser["layout"] = str
attribute_parser["duration"] = float
tiles_info = _parse_attribute_list(protocol.ext_x_tiles, line, attribute_parser)
data["tiles"].append(tiles_info)
def _parse_media(line, data, **kwargs):
quoted = remove_quotes_parser(
"uri",
"group_id",
"language",
"assoc_language",
"name",
"instream_id",
"characteristics",
"channels",
"stable_rendition_id",
"thumbnails",
"image",
)
media = _parse_attribute_list(protocol.ext_x_media, line, quoted)
data["media"].append(media)
def _parse_variant_playlist(line, data, state, **kwargs):
playlist = {"uri": line, "stream_info": state.pop("stream_info")}
data["playlists"].append(playlist)
state["expect_playlist"] = False
def _parse_bitrate(state, **kwargs):
if "segment" not in state:
state["segment"] = {}
state["segment"]["bitrate"] = _parse_simple_parameter(cast_to=int, **kwargs)
def _parse_byterange(line, state, **kwargs):
if "segment" not in state:
state["segment"] = {}
state["segment"]["byterange"] = line.replace(protocol.ext_x_byterange + ":", "")
state["expect_segment"] = True
def _parse_targetduration(**parse_kwargs):
return _parse_simple_parameter(cast_to=int, **parse_kwargs)
def _parse_media_sequence(**parse_kwargs):
return _parse_simple_parameter(cast_to=int, **parse_kwargs)
def _parse_discontinuity_sequence(**parse_kwargs):
return _parse_simple_parameter(cast_to=int, **parse_kwargs)
def _parse_program_date_time(line, state, data, **parse_kwargs):
_, program_date_time = _parse_simple_parameter_raw_value(
line, cast_to=cast_date_time, **parse_kwargs
)
if not data.get("program_date_time"):
data["program_date_time"] = program_date_time
state["current_program_date_time"] = program_date_time
state["program_date_time"] = program_date_time
def _parse_discontinuity(state, **parse_kwargs):
state["discontinuity"] = True
def _parse_cue_in(state, **parse_kwargs):
state["cue_in"] = True
def _parse_cue_span(state, **parse_kwargs):
state["cue_out"] = True
def _parse_version(**parse_kwargs):
return _parse_simple_parameter(cast_to=int, **parse_kwargs)
def _parse_allow_cache(**parse_kwargs):
return _parse_simple_parameter(cast_to=str, **parse_kwargs)
def _parse_playlist_type(line, data, **kwargs):
return _parse_simple_parameter(line, data)
def _parse_x_map(line, data, state, **kwargs):
quoted_parser = remove_quotes_parser("uri", "byterange")
segment_map_info = _parse_attribute_list(protocol.ext_x_map, line, quoted_parser)
state["current_segment_map"] = segment_map_info
data["segment_map"].append(segment_map_info)
def _parse_start(line, data, **kwargs):
attribute_parser = {"time_offset": lambda x: float(x)}
start_info = _parse_attribute_list(protocol.ext_x_start, line, attribute_parser)
data["start"] = start_info
def _parse_gap(state, **kwargs):
state["gap"] = True
def _parse_simple_parameter_raw_value(line, cast_to=str, normalize=False, **kwargs):
param, value = line.split(":", 1)
param = normalize_attribute(param.replace("#EXT-X-", ""))
if normalize:
value = value.strip().lower()
return param, cast_to(value)
def _parse_and_set_simple_parameter_raw_value(
line, data, cast_to=str, normalize=False, **kwargs
):
param, value = _parse_simple_parameter_raw_value(line, cast_to, normalize)
data[param] = value
return data[param]
def _parse_simple_parameter(line, data, cast_to=str, **kwargs):
return _parse_and_set_simple_parameter_raw_value(line, data, cast_to, True)
def _parse_i_frames_only(data, **kwargs):
data["is_i_frames_only"] = True
def _parse_is_independent_segments(data, **kwargs):
data["is_independent_segments"] = True
def _parse_endlist(data, **kwargs):
data["is_endlist"] = True
def _parse_cueout_cont(line, state, **kwargs):
state["cue_out"] = True
elements = line.split(":", 1)
if len(elements) != 2:
return
# EXT-X-CUE-OUT-CONT:ElapsedTime=10,Duration=60,SCTE35=... style
cue_info = _parse_attribute_list(
protocol.ext_x_cue_out_cont,
line,
remove_quotes_parser("duration", "elapsedtime", "scte35"),
)
# EXT-X-CUE-OUT-CONT:2.436/120 style
progress = cue_info.get("")
if progress:
progress_parts = progress.split("/", 1)
if len(progress_parts) == 1:
state["current_cue_out_duration"] = progress_parts[0]
else:
state["current_cue_out_elapsedtime"] = progress_parts[0]
state["current_cue_out_duration"] = progress_parts[1]
duration = cue_info.get("duration")
if duration:
state["current_cue_out_duration"] = duration
scte35 = cue_info.get("scte35")
if scte35:
state["current_cue_out_scte35"] = scte35
elapsedtime = cue_info.get("elapsedtime")
if elapsedtime:
state["current_cue_out_elapsedtime"] = elapsedtime
def _parse_cueout(line, state, **kwargs):
state["cue_out_start"] = True
state["cue_out"] = True
if "DURATION" in line.upper():
state["cue_out_explicitly_duration"] = True
elements = line.split(":", 1)
if len(elements) != 2:
return
cue_info = _parse_attribute_list(
protocol.ext_x_cue_out,
line,
remove_quotes_parser("cue"),
)
cue_out_scte35 = cue_info.get("cue")
cue_out_duration = cue_info.get("duration") or cue_info.get("")
current_cue_out_scte35 = state.get("current_cue_out_scte35")
state["current_cue_out_scte35"] = cue_out_scte35 or current_cue_out_scte35
state["current_cue_out_duration"] = cue_out_duration
def _parse_server_control(line, data, **kwargs):
attribute_parser = {
"can_block_reload": str,
"hold_back": lambda x: float(x),
"part_hold_back": lambda x: float(x),
"can_skip_until": lambda x: float(x),
"can_skip_dateranges": str,
}
data["server_control"] = _parse_attribute_list(
protocol.ext_x_server_control, line, attribute_parser
)
def _parse_part_inf(line, data, **kwargs):
attribute_parser = {"part_target": lambda x: float(x)}
data["part_inf"] = _parse_attribute_list(
protocol.ext_x_part_inf, line, attribute_parser
)
def _parse_rendition_report(line, data, **kwargs):
attribute_parser = remove_quotes_parser("uri")
attribute_parser["last_msn"] = int
attribute_parser["last_part"] = int
rendition_report = _parse_attribute_list(
protocol.ext_x_rendition_report, line, attribute_parser
)
data["rendition_reports"].append(rendition_report)
def _parse_part(line, state, **kwargs):
attribute_parser = remove_quotes_parser("uri")
attribute_parser["duration"] = lambda x: float(x)
attribute_parser["independent"] = str
attribute_parser["gap"] = str
attribute_parser["byterange"] = str
part = _parse_attribute_list(protocol.ext_x_part, line, attribute_parser)
# this should always be true according to spec
if state.get("current_program_date_time"):
part["program_date_time"] = state["current_program_date_time"]
state["current_program_date_time"] += timedelta(seconds=part["duration"])
part["dateranges"] = state.pop("dateranges", None)
part["gap_tag"] = state.pop("gap", None)
if "segment" not in state:
state["segment"] = {}
segment = state["segment"]
if "parts" not in segment:
segment["parts"] = []
segment["parts"].append(part)
def _parse_skip(line, data, **parse_kwargs):
attribute_parser = remove_quotes_parser("recently_removed_dateranges")
attribute_parser["skipped_segments"] = int
data["skip"] = _parse_attribute_list(protocol.ext_x_skip, line, attribute_parser)
def _parse_session_data(line, data, **kwargs):
quoted = remove_quotes_parser("data_id", "value", "uri", "language")
session_data = _parse_attribute_list(protocol.ext_x_session_data, line, quoted)
data["session_data"].append(session_data)
def _parse_session_key(line, data, **kwargs):
params = ATTRIBUTELISTPATTERN.split(
line.replace(protocol.ext_x_session_key + ":", "")
)[1::2]
key = {}
for param in params:
name, value = param.split("=", 1)
key[normalize_attribute(name)] = remove_quotes(value)
data["session_keys"].append(key)
def _parse_preload_hint(line, data, **kwargs):
attribute_parser = remove_quotes_parser("uri")
attribute_parser["type"] = str
attribute_parser["byterange_start"] = int
attribute_parser["byterange_length"] = int
data["preload_hint"] = _parse_attribute_list(
protocol.ext_x_preload_hint, line, attribute_parser
)
def _parse_daterange(line, state, **kwargs):
attribute_parser = remove_quotes_parser("id", "class", "start_date", "end_date")
attribute_parser["duration"] = float
attribute_parser["planned_duration"] = float
attribute_parser["end_on_next"] = str
attribute_parser["scte35_cmd"] = str
attribute_parser["scte35_out"] = str
attribute_parser["scte35_in"] = str
parsed = _parse_attribute_list(protocol.ext_x_daterange, line, attribute_parser)
if "dateranges" not in state:
state["dateranges"] = []
state["dateranges"].append(parsed)
def _parse_content_steering(line, data, **kwargs):
attribute_parser = remove_quotes_parser("server_uri", "pathway_id")
data["content_steering"] = _parse_attribute_list(
protocol.ext_x_content_steering, line, attribute_parser
)
def _parse_oatcls_scte35(line, state, **kwargs):
scte35_cue = line.split(":", 1)[1]
state["current_cue_out_oatcls_scte35"] = scte35_cue
state["current_cue_out_scte35"] = scte35_cue
def _parse_asset(line, state, **kwargs):
# EXT-X-ASSET attribute values may or may not be quoted, and need to be URL-encoded.
# They are preserved as-is here to prevent loss of information.
state["asset_metadata"] = _parse_attribute_list(
protocol.ext_x_asset, line, {}, default_parser=str
)
def string_to_lines(string):
return string.strip().splitlines()
def remove_quotes_parser(*attrs):
return dict(zip(attrs, itertools.repeat(remove_quotes)))
def remove_quotes(string):
"""
Remove quotes from string.
Ex.:
"foo" -> foo
'foo' -> foo
'foo -> 'foo
"""
quotes = ('"', "'")
if string.startswith(quotes) and string.endswith(quotes):
return string[1:-1]
return string
def normalize_attribute(attribute):
return attribute.replace("-", "_").lower().strip()
def get_segment_custom_value(state, key, default=None):
"""
Helper function for getting custom values for Segment
Are useful with custom_tags_parser
"""
if "segment" not in state:
return default
if "custom_parser_values" not in state["segment"]:
return default
return state["segment"]["custom_parser_values"].get(key, default)
def save_segment_custom_value(state, key, value):
"""
Helper function for saving custom values for Segment
Are useful with custom_tags_parser
"""
if "segment" not in state:
state["segment"] = {}
if "custom_parser_values" not in state["segment"]:
state["segment"]["custom_parser_values"] = {}
state["segment"]["custom_parser_values"][key] = value


@ -0,0 +1,45 @@
# Copyright 2014 Globo.com Player authors. All rights reserved.
# Use of this source code is governed by a MIT License
# license that can be found in the LICENSE file.
ext_m3u = "#EXTM3U"
ext_x_targetduration = "#EXT-X-TARGETDURATION"
ext_x_media_sequence = "#EXT-X-MEDIA-SEQUENCE"
ext_x_discontinuity_sequence = "#EXT-X-DISCONTINUITY-SEQUENCE"
ext_x_program_date_time = "#EXT-X-PROGRAM-DATE-TIME"
ext_x_media = "#EXT-X-MEDIA"
ext_x_playlist_type = "#EXT-X-PLAYLIST-TYPE"
ext_x_key = "#EXT-X-KEY"
ext_x_stream_inf = "#EXT-X-STREAM-INF"
ext_x_version = "#EXT-X-VERSION"
ext_x_allow_cache = "#EXT-X-ALLOW-CACHE"
ext_x_endlist = "#EXT-X-ENDLIST"
extinf = "#EXTINF"
ext_i_frames_only = "#EXT-X-I-FRAMES-ONLY"
ext_x_asset = "#EXT-X-ASSET"
ext_x_bitrate = "#EXT-X-BITRATE"
ext_x_byterange = "#EXT-X-BYTERANGE"
ext_x_i_frame_stream_inf = "#EXT-X-I-FRAME-STREAM-INF"
ext_x_discontinuity = "#EXT-X-DISCONTINUITY"
ext_x_cue_out = "#EXT-X-CUE-OUT"
ext_x_cue_out_cont = "#EXT-X-CUE-OUT-CONT"
ext_x_cue_in = "#EXT-X-CUE-IN"
ext_x_cue_span = "#EXT-X-CUE-SPAN"
ext_oatcls_scte35 = "#EXT-OATCLS-SCTE35"
ext_is_independent_segments = "#EXT-X-INDEPENDENT-SEGMENTS"
ext_x_map = "#EXT-X-MAP"
ext_x_start = "#EXT-X-START"
ext_x_server_control = "#EXT-X-SERVER-CONTROL"
ext_x_part_inf = "#EXT-X-PART-INF"
ext_x_part = "#EXT-X-PART"
ext_x_rendition_report = "#EXT-X-RENDITION-REPORT"
ext_x_skip = "#EXT-X-SKIP"
ext_x_session_data = "#EXT-X-SESSION-DATA"
ext_x_session_key = "#EXT-X-SESSION-KEY"
ext_x_preload_hint = "#EXT-X-PRELOAD-HINT"
ext_x_daterange = "#EXT-X-DATERANGE"
ext_x_gap = "#EXT-X-GAP"
ext_x_content_steering = "#EXT-X-CONTENT-STEERING"
ext_x_image_stream_inf = "#EXT-X-IMAGE-STREAM-INF"
ext_x_images_only = "#EXT-X-IMAGES-ONLY"
ext_x_tiles = "#EXT-X-TILES"


@ -0,0 +1,37 @@
from m3u8 import protocol
from m3u8.version_matching_rules import VersionMatchingError, available_rules
def get_version(file_lines: list[str]):
for line in file_lines:
if line.startswith(protocol.ext_x_version):
version = line.split(":")[1]
return float(version)
return None
def valid_in_all_rules(
line_number: int, line: str, version: float
) -> list[VersionMatchingError]:
errors = []
for rule in available_rules:
validator = rule(version, line_number, line)
if not validator.validate():
errors.append(validator.get_error())
return errors
def validate(file_lines: list[str]) -> list[VersionMatchingError]:
found_version = get_version(file_lines)
if found_version is None:
return []
errors = []
for number, line in enumerate(file_lines):
errors_in_line = valid_in_all_rules(number, line, found_version)
errors.extend(errors_in_line)
return errors


@ -0,0 +1,108 @@
from dataclasses import dataclass
from m3u8 import protocol
@dataclass
class VersionMatchingError(Exception):
line_number: int
line: str
how_to_fix: str = "Please fix the version matching error."
description: str = "There is a version matching error in the file."
def __str__(self):
return (
"Version matching error found in the file when parsing in strict mode.\n"
f"Line {self.line_number}: {self.description}\n"
f"Line content: {self.line}\n"
f"How to fix: {self.how_to_fix}"
"\n"
)
class VersionMatchRuleBase:
description: str = ""
how_to_fix: str = ""
version: float
line_number: int
line: str
def __init__(self, version: float, line_number: int, line: str) -> None:
self.version = version
self.line_number = line_number
self.line = line
def validate(self):
raise NotImplementedError
def get_error(self):
return VersionMatchingError(
line_number=self.line_number,
line=self.line,
description=self.description,
how_to_fix=self.how_to_fix,
)
class ValidIVInEXTXKEY(VersionMatchRuleBase):
description = (
"You must use at least protocol version 2 if you have IV in EXT-X-KEY."
)
how_to_fix = "Change the protocol version to 2 or higher."
def validate(self):
if protocol.ext_x_key not in self.line:
return True
if "IV" in self.line:
return self.version >= 2
return True
class ValidFloatingPointEXTINF(VersionMatchRuleBase):
description = "You must use at least protocol version 3 if you have floating point EXTINF duration values."
how_to_fix = "Change the protocol version to 3 or higher."
def validate(self):
if protocol.extinf not in self.line:
return True
chunks = self.line.replace(protocol.extinf + ":", "").split(",", 1)
duration = chunks[0]
def is_number(value: str):
try:
float(value)
return True
except ValueError:
return False
def is_floating_number(value: str):
return is_number(value) and "." in value
if is_floating_number(duration):
return self.version >= 3
return is_number(duration)
class ValidEXTXBYTERANGEOrEXTXIFRAMESONLY(VersionMatchRuleBase):
description = "You must use at least protocol version 4 if you have EXT-X-BYTERANGE or EXT-X-IFRAME-ONLY."
how_to_fix = "Change the protocol version to 4 or higher."
def validate(self):
if (
protocol.ext_x_byterange not in self.line
and protocol.ext_i_frames_only not in self.line
):
return True
return self.version >= 4
available_rules: list[type[VersionMatchRuleBase]] = [
ValidIVInEXTXKEY,
ValidFloatingPointEXTINF,
ValidEXTXBYTERANGEOrEXTXIFRAMESONLY,
]
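Together with `version_matching.validate`, these rules are what strict-mode parsing raises on. For example (playlist content invented for illustration), a floating-point `EXTINF` duration under `#EXT-X-VERSION:2` trips `ValidFloatingPointEXTINF`:

```python
import m3u8

# Version 2 playlists may not use fractional EXTINF durations
# (allowed from protocol version 3 onward).
bad = "#EXTM3U\n#EXT-X-VERSION:2\n#EXTINF:5.220,\nseg1.ts\n#EXT-X-ENDLIST"
try:
    m3u8.parse(bad, strict=True)
except Exception as errors:
    print(errors)  # includes the VersionMatchingError for the EXTINF line
```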


@ -0,0 +1,8 @@
-r requirements.txt
bottle
pytest
# pytest-cov 2.6.0 has increased the version requirement
# for the coverage package from >=3.7.1 to >=4.4,
# which is in conflict with the version requirement
# defined by the python-coveralls package for coverage==4.0.3
pytest-cov>=2.4.0,<2.6


@ -0,0 +1 @@
backports-datetime-fromisoformat; python_version < '3.11'

36
scripts/m3u8/runtests Normal file

@ -0,0 +1,36 @@
#!/bin/bash
test_server_stdout=tests/server.stdout
function install_deps {
pip install -r requirements-dev.txt
}
function start_server {
rm -f ${test_server_stdout}
python tests/m3u8server.py >${test_server_stdout} 2>&1 &
}
function stop_server {
pkill -9 -f m3u8server.py
echo "Test server stdout on ${test_server_stdout}"
}
function run {
PYTHONPATH=. py.test -vv --cov-report term-missing --cov m3u8 tests/
}
function main {
install_deps
start_server
run
retval=$?
stop_server
return "$retval"
}
if [ -z "$1" ]; then
main
else
"$@"
fi

28
scripts/m3u8/setup.py Normal file

@ -0,0 +1,28 @@
from os.path import abspath, dirname, exists, join
from setuptools import setup
long_description = None
if exists("README.md"):
with open("README.md") as file:
long_description = file.read()
install_reqs = [
req for req in open(abspath(join(dirname(__file__), "requirements.txt")))
]
setup(
name="m3u8",
author="Globo.com",
version="6.0.0",
license="MIT",
zip_safe=False,
include_package_data=True,
install_requires=install_reqs,
packages=["m3u8"],
url="https://github.com/globocom/m3u8",
description="Python m3u8 parser",
long_description=long_description,
long_description_content_type="text/markdown",
python_requires=">=3.9",
)



@ -0,0 +1,33 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Copyright 2007 Google Inc. All Rights Reserved.
__version__ = '3.20.2'


@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/any.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/any.proto\x12\x0fgoogle.protobuf\"&\n\x03\x41ny\x12\x10\n\x08type_url\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x0c\x42v\n\x13\x63om.google.protobufB\x08\x41nyProtoP\x01Z,google.golang.org/protobuf/types/known/anypb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.any_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\010AnyProtoP\001Z,google.golang.org/protobuf/types/known/anypb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_ANY._serialized_start=46
_ANY._serialized_end=84
# @@protoc_insertion_point(module_scope)


@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/api.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import source_context_pb2 as google_dot_protobuf_dot_source__context__pb2
from google.protobuf import type_pb2 as google_dot_protobuf_dot_type__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/api.proto\x12\x0fgoogle.protobuf\x1a$google/protobuf/source_context.proto\x1a\x1agoogle/protobuf/type.proto\"\x81\x02\n\x03\x41pi\x12\x0c\n\x04name\x18\x01 \x01(\t\x12(\n\x07methods\x18\x02 \x03(\x0b\x32\x17.google.protobuf.Method\x12(\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.Option\x12\x0f\n\x07version\x18\x04 \x01(\t\x12\x36\n\x0esource_context\x18\x05 \x01(\x0b\x32\x1e.google.protobuf.SourceContext\x12&\n\x06mixins\x18\x06 \x03(\x0b\x32\x16.google.protobuf.Mixin\x12\'\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.Syntax\"\xd5\x01\n\x06Method\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x18\n\x10request_type_url\x18\x02 \x01(\t\x12\x19\n\x11request_streaming\x18\x03 \x01(\x08\x12\x19\n\x11response_type_url\x18\x04 \x01(\t\x12\x1a\n\x12response_streaming\x18\x05 \x01(\x08\x12(\n\x07options\x18\x06 \x03(\x0b\x32\x17.google.protobuf.Option\x12\'\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.Syntax\"#\n\x05Mixin\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04root\x18\x02 \x01(\tBv\n\x13\x63om.google.protobufB\x08\x41piProtoP\x01Z,google.golang.org/protobuf/types/known/apipb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.api_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\010ApiProtoP\001Z,google.golang.org/protobuf/types/known/apipb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_API._serialized_start=113
_API._serialized_end=370
_METHOD._serialized_start=373
_METHOD._serialized_end=586
_MIXIN._serialized_start=588
_MIXIN._serialized_end=623
# @@protoc_insertion_point(module_scope)
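
api_pb2 exposes the Api, Method and Mixin messages used for API reflection. A minimal construction sketch (the service and method names are invented for illustration; field names follow the serialized descriptor above):

from google.protobuf import api_pb2

api = api_pb2.Api(name='example.Frobber', version='v1')
method = api.methods.add(name='Frob')   # add() on a repeated composite field
method.request_streaming = False
assert len(api.methods) == 1 and api.methods[0].name == 'Frob'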

View File

@@ -0,0 +1,35 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/compiler/plugin.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import descriptor_pb2 as google_dot_protobuf_dot_descriptor__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n%google/protobuf/compiler/plugin.proto\x12\x18google.protobuf.compiler\x1a google/protobuf/descriptor.proto\"F\n\x07Version\x12\r\n\x05major\x18\x01 \x01(\x05\x12\r\n\x05minor\x18\x02 \x01(\x05\x12\r\n\x05patch\x18\x03 \x01(\x05\x12\x0e\n\x06suffix\x18\x04 \x01(\t\"\xba\x01\n\x14\x43odeGeneratorRequest\x12\x18\n\x10\x66ile_to_generate\x18\x01 \x03(\t\x12\x11\n\tparameter\x18\x02 \x01(\t\x12\x38\n\nproto_file\x18\x0f \x03(\x0b\x32$.google.protobuf.FileDescriptorProto\x12;\n\x10\x63ompiler_version\x18\x03 \x01(\x0b\x32!.google.protobuf.compiler.Version\"\xc1\x02\n\x15\x43odeGeneratorResponse\x12\r\n\x05\x65rror\x18\x01 \x01(\t\x12\x1a\n\x12supported_features\x18\x02 \x01(\x04\x12\x42\n\x04\x66ile\x18\x0f \x03(\x0b\x32\x34.google.protobuf.compiler.CodeGeneratorResponse.File\x1a\x7f\n\x04\x46ile\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x17\n\x0finsertion_point\x18\x02 \x01(\t\x12\x0f\n\x07\x63ontent\x18\x0f \x01(\t\x12?\n\x13generated_code_info\x18\x10 \x01(\x0b\x32\".google.protobuf.GeneratedCodeInfo\"8\n\x07\x46\x65\x61ture\x12\x10\n\x0c\x46\x45\x41TURE_NONE\x10\x00\x12\x1b\n\x17\x46\x45\x41TURE_PROTO3_OPTIONAL\x10\x01\x42W\n\x1c\x63om.google.protobuf.compilerB\x0cPluginProtosZ)google.golang.org/protobuf/types/pluginpb')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.compiler.plugin_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\034com.google.protobuf.compilerB\014PluginProtosZ)google.golang.org/protobuf/types/pluginpb'
_VERSION._serialized_start=101
_VERSION._serialized_end=171
_CODEGENERATORREQUEST._serialized_start=174
_CODEGENERATORREQUEST._serialized_end=360
_CODEGENERATORRESPONSE._serialized_start=363
_CODEGENERATORRESPONSE._serialized_end=684
_CODEGENERATORRESPONSE_FILE._serialized_start=499
_CODEGENERATORRESPONSE_FILE._serialized_end=626
_CODEGENERATORRESPONSE_FEATURE._serialized_start=628
_CODEGENERATORRESPONSE_FEATURE._serialized_end=684
# @@protoc_insertion_point(module_scope)
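
plugin_pb2 defines the protoc plugin contract: protoc writes a serialized CodeGeneratorRequest to the plugin's stdin and reads a serialized CodeGeneratorResponse from its stdout. A skeletal plugin might look like this (the output file name and content are placeholders):

import sys
from google.protobuf.compiler import plugin_pb2

request = plugin_pb2.CodeGeneratorRequest.FromString(sys.stdin.buffer.read())
response = plugin_pb2.CodeGeneratorResponse()
for proto_name in request.file_to_generate:
  out = response.file.add()
  out.name = proto_name + '.txt'       # placeholder output path
  out.content = '// generated from %s\n' % proto_name
sys.stdout.buffer.write(response.SerializeToString())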

File diff suppressed because it is too large

View File

@@ -0,0 +1,177 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Provides a container for DescriptorProtos."""
__author__ = 'matthewtoia@google.com (Matt Toia)'
import warnings
class Error(Exception):
pass
class DescriptorDatabaseConflictingDefinitionError(Error):
"""Raised when a proto is added with the same name & different descriptor."""
class DescriptorDatabase(object):
"""A container accepting FileDescriptorProtos and maps DescriptorProtos."""
def __init__(self):
self._file_desc_protos_by_file = {}
self._file_desc_protos_by_symbol = {}
def Add(self, file_desc_proto):
"""Adds the FileDescriptorProto and its types to this database.
Args:
file_desc_proto: The FileDescriptorProto to add.
Raises:
DescriptorDatabaseConflictingDefinitionError: if an attempt is made to
add a proto with the same name but different definition than an
existing proto in the database.
"""
proto_name = file_desc_proto.name
if proto_name not in self._file_desc_protos_by_file:
self._file_desc_protos_by_file[proto_name] = file_desc_proto
elif self._file_desc_protos_by_file[proto_name] != file_desc_proto:
raise DescriptorDatabaseConflictingDefinitionError(
'%s already added, but with different descriptor.' % proto_name)
else:
return
# Add all the top-level descriptors to the index.
package = file_desc_proto.package
for message in file_desc_proto.message_type:
for name in _ExtractSymbols(message, package):
self._AddSymbol(name, file_desc_proto)
for enum in file_desc_proto.enum_type:
self._AddSymbol(('.'.join((package, enum.name))), file_desc_proto)
for enum_value in enum.value:
self._file_desc_protos_by_symbol[
'.'.join((package, enum_value.name))] = file_desc_proto
for extension in file_desc_proto.extension:
self._AddSymbol(('.'.join((package, extension.name))), file_desc_proto)
for service in file_desc_proto.service:
self._AddSymbol(('.'.join((package, service.name))), file_desc_proto)
def FindFileByName(self, name):
"""Finds the file descriptor proto by file name.
Typically the file name is a relative path ending in .proto. The proto
with the given name must previously have been added to this database
using the Add method, or else an error will be raised.
Args:
name: The file name to find.
Returns:
The file descriptor proto matching the name.
Raises:
KeyError if no file by the given name was added.
"""
return self._file_desc_protos_by_file[name]
def FindFileContainingSymbol(self, symbol):
"""Finds the file descriptor proto containing the specified symbol.
The symbol should be a fully qualified name including the file descriptor's
package and any containing messages. Some examples:
'some.package.name.Message'
'some.package.name.Message.NestedEnum'
'some.package.name.Message.some_field'
The file descriptor proto containing the specified symbol must be added to
this database using the Add method or else an error will be raised.
Args:
symbol: The fully qualified symbol name.
Returns:
The file descriptor proto containing the symbol.
Raises:
KeyError if no file contains the specified symbol.
"""
try:
return self._file_desc_protos_by_symbol[symbol]
except KeyError:
# Fields, enum values, and nested extensions are not in
# _file_desc_protos_by_symbol. Try to find the top-level
# descriptor. A non-existent nested symbol under a valid top-level
# descriptor will also be found; this matches the behavior of the
# protobuf C++ implementation.
top_level, _, _ = symbol.rpartition('.')
try:
return self._file_desc_protos_by_symbol[top_level]
except KeyError:
# Raise the original symbol as a KeyError for better diagnostics.
raise KeyError(symbol)
def FindFileContainingExtension(self, extendee_name, extension_number):
# TODO(jieluo): implement this API.
return None
def FindAllExtensionNumbers(self, extendee_name):
# TODO(jieluo): implement this API.
return []
def _AddSymbol(self, name, file_desc_proto):
if name in self._file_desc_protos_by_symbol:
warn_msg = ('Conflicting symbol registration for file "' + file_desc_proto.name +
'": ' + name +
' is already defined in file "' +
self._file_desc_protos_by_symbol[name].name + '"')
warnings.warn(warn_msg, RuntimeWarning)
self._file_desc_protos_by_symbol[name] = file_desc_proto
def _ExtractSymbols(desc_proto, package):
"""Pulls out all the symbols from a descriptor proto.
Args:
desc_proto: The proto to extract symbols from.
package: The package containing the descriptor type.
Yields:
The fully qualified name found in the descriptor.
"""
message_name = package + '.' + desc_proto.name if package else desc_proto.name
yield message_name
for nested_type in desc_proto.nested_type:
for symbol in _ExtractSymbols(nested_type, message_name):
yield symbol
for enum_type in desc_proto.enum_type:
yield '.'.join((message_name, enum_type.name))
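
A short usage sketch for DescriptorDatabase, with a hand-built FileDescriptorProto (the file, package and message names are invented for the example):

from google.protobuf import descriptor_pb2
from google.protobuf import descriptor_database

db = descriptor_database.DescriptorDatabase()
fdp = descriptor_pb2.FileDescriptorProto(name='shop/widget.proto', package='shop')
fdp.message_type.add(name='Widget')
db.Add(fdp)
assert db.FindFileByName('shop/widget.proto') is fdp
assert db.FindFileContainingSymbol('shop.Widget') is fdp
# Nested symbols fall back to their top-level descriptor:
assert db.FindFileContainingSymbol('shop.Widget.some_field') is fdp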

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

View File

@@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/duration.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1egoogle/protobuf/duration.proto\x12\x0fgoogle.protobuf\"*\n\x08\x44uration\x12\x0f\n\x07seconds\x18\x01 \x01(\x03\x12\r\n\x05nanos\x18\x02 \x01(\x05\x42\x83\x01\n\x13\x63om.google.protobufB\rDurationProtoP\x01Z1google.golang.org/protobuf/types/known/durationpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.duration_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\rDurationProtoP\001Z1google.golang.org/protobuf/types/known/durationpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_DURATION._serialized_start=51
_DURATION._serialized_end=93
# @@protoc_insertion_point(module_scope)
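
Duration instances gain conversion helpers from the well-known-types mixins; a quick sketch:

import datetime
from google.protobuf import duration_pb2

d = duration_pb2.Duration()
d.FromTimedelta(datetime.timedelta(seconds=90, milliseconds=250))
assert (d.seconds, d.nanos) == (90, 250000000)
assert d.ToTimedelta() == datetime.timedelta(seconds=90, milliseconds=250)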

View File

@@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/empty.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1bgoogle/protobuf/empty.proto\x12\x0fgoogle.protobuf\"\x07\n\x05\x45mptyB}\n\x13\x63om.google.protobufB\nEmptyProtoP\x01Z.google.golang.org/protobuf/types/known/emptypb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.empty_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\nEmptyProtoP\001Z.google.golang.org/protobuf/types/known/emptypb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_EMPTY._serialized_start=48
_EMPTY._serialized_end=55
# @@protoc_insertion_point(module_scope)
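
Empty has no fields, so it serializes to zero bytes; it is commonly used as a "void" request or response type:

from google.protobuf import empty_pb2

assert empty_pb2.Empty().SerializeToString() == b''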

View File

@@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/field_mask.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n google/protobuf/field_mask.proto\x12\x0fgoogle.protobuf\"\x1a\n\tFieldMask\x12\r\n\x05paths\x18\x01 \x03(\tB\x85\x01\n\x13\x63om.google.protobufB\x0e\x46ieldMaskProtoP\x01Z2google.golang.org/protobuf/types/known/fieldmaskpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.field_mask_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\016FieldMaskProtoP\001Z2google.golang.org/protobuf/types/known/fieldmaskpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_FIELDMASK._serialized_start=53
_FIELDMASK._serialized_end=79
# @@protoc_insertion_point(module_scope)
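
FieldMask also carries well-known-type helpers for its canonical comma-separated JSON form; a brief sketch (paths chosen without underscores, since the JSON form camel-cases names):

from google.protobuf import field_mask_pb2

mask = field_mask_pb2.FieldMask(paths=['user.name', 'user.email'])
assert mask.ToJsonString() == 'user.name,user.email'

mask2 = field_mask_pb2.FieldMask()
mask2.FromJsonString('a.b,a.c')
assert list(mask2.paths) == ['a.b', 'a.c']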

View File

@@ -0,0 +1,443 @@
#! /usr/bin/env python
#
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Adds support for parameterized tests to Python's unittest TestCase class.
A parameterized test is a method in a test case that is invoked with different
argument tuples.
A simple example:
class AdditionExample(parameterized.TestCase):
@parameterized.parameters(
(1, 2, 3),
(4, 5, 9),
(1, 1, 3))
def testAddition(self, op1, op2, result):
self.assertEqual(result, op1 + op2)
Each invocation is a separate test case and properly isolated just
like a normal test method, with its own setUp/tearDown cycle. In the
example above, there are three separate testcases, one of which will
fail due to an assertion error (1 + 1 != 3).
Parameters for individual test cases can be tuples (with positional parameters)
or dictionaries (with named parameters):
class AdditionExample(parameterized.TestCase):
@parameterized.parameters(
{'op1': 1, 'op2': 2, 'result': 3},
{'op1': 4, 'op2': 5, 'result': 9},
)
def testAddition(self, op1, op2, result):
self.assertEqual(result, op1 + op2)
If a parameterized test fails, the error message will show the
original test name (which is modified internally) and the arguments
for the specific invocation, which are part of the string returned by
the shortDescription() method on test cases.
The id method of the test, used internally by the unittest framework,
is also modified to show the arguments. To make sure that test names
stay the same across several invocations, object representations like
>>> class Foo(object):
... pass
>>> repr(Foo())
'<__main__.Foo object at 0x23d8610>'
are turned into '<__main__.Foo>'. For even more descriptive names,
especially in test logs, you can use the named_parameters decorator. In
this case, only tuples are supported, and the first parameter has to
be a string (or an object that returns an apt name when converted via
str()):
class NamedExample(parameterized.TestCase):
@parameterized.named_parameters(
('Normal', 'aa', 'aaa', True),
('EmptyPrefix', '', 'abc', True),
('BothEmpty', '', '', True))
def testStartsWith(self, prefix, string, result):
self.assertEqual(result, string.startswith(prefix))
Named tests also have the benefit that they can be run individually
from the command line:
$ testmodule.py NamedExample.testStartsWithNormal
.
--------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Parameterized Classes
=====================
If invocation arguments are shared across test methods in a single
TestCase class, instead of decorating all test methods
individually, the class itself can be decorated:
@parameterized.parameters(
(1, 2, 3),
(4, 5, 9))
class ArithmeticTest(parameterized.TestCase):
def testAdd(self, arg1, arg2, result):
self.assertEqual(arg1 + arg2, result)
def testSubtract(self, arg1, arg2, result):
self.assertEqual(result - arg1, arg2)
Inputs from Iterables
=====================
If parameters should be shared across several test cases, or are dynamically
created from other sources, a single non-tuple iterable can be passed into
the decorator. This iterable will be used to obtain the test cases:
class AdditionExample(parameterized.TestCase):
@parameterized.parameters(
(c.op1, c.op2, c.result) for c in testcases
)
def testAddition(self, op1, op2, result):
self.assertEqual(result, op1 + op2)
Single-Argument Test Methods
============================
If a test method takes only one argument, the single argument does not need to
be wrapped into a tuple:
class NegativeNumberExample(parameterized.TestCase):
@parameterized.parameters(
-1, -3, -4, -5
)
def testIsNegative(self, arg):
self.assertTrue(IsNegative(arg))
"""
__author__ = 'tmarek@google.com (Torsten Marek)'
import functools
import re
import types
import unittest
import uuid
try:
# Since python 3
import collections.abc as collections_abc
except ImportError:
# Fallback for older versions; the aliases were removed in Python 3.10.
import collections as collections_abc
ADDR_RE = re.compile(r'\<([a-zA-Z0-9_\-\.]+) object at 0x[a-fA-F0-9]+\>')
_SEPARATOR = uuid.uuid1().hex
_FIRST_ARG = object()
_ARGUMENT_REPR = object()
def _CleanRepr(obj):
return ADDR_RE.sub(r'<\1>', repr(obj))
# Helper function formerly from the unittest module, removed from it in
# Python 2.7.
def _StrClass(cls):
return '%s.%s' % (cls.__module__, cls.__name__)
def _NonStringIterable(obj):
return (isinstance(obj, collections_abc.Iterable) and
not isinstance(obj, str))
def _FormatParameterList(testcase_params):
if isinstance(testcase_params, collections_abc.Mapping):
return ', '.join('%s=%s' % (argname, _CleanRepr(value))
for argname, value in testcase_params.items())
elif _NonStringIterable(testcase_params):
return ', '.join(map(_CleanRepr, testcase_params))
else:
return _FormatParameterList((testcase_params,))
class _ParameterizedTestIter(object):
"""Callable and iterable class for producing new test cases."""
def __init__(self, test_method, testcases, naming_type):
"""Returns concrete test functions for a test and a list of parameters.
The naming_type is used to determine the name of the concrete
functions as reported by the unittest framework. If naming_type is
_FIRST_ARG, the testcases must be tuples, and the first element must
have a string representation that is a valid Python identifier.
Args:
test_method: The decorated test method.
testcases: (list of tuple/dict) A list of parameter
tuples/dicts for individual test invocations.
naming_type: The test naming type, either _FIRST_ARG or _ARGUMENT_REPR.
"""
self._test_method = test_method
self.testcases = testcases
self._naming_type = naming_type
def __call__(self, *args, **kwargs):
raise RuntimeError('You appear to be running a parameterized test case '
'without having inherited from parameterized.'
'TestCase. This is bad because none of '
'your test cases are actually being run.')
def __iter__(self):
test_method = self._test_method
naming_type = self._naming_type
def MakeBoundParamTest(testcase_params):
@functools.wraps(test_method)
def BoundParamTest(self):
if isinstance(testcase_params, collections_abc.Mapping):
test_method(self, **testcase_params)
elif _NonStringIterable(testcase_params):
test_method(self, *testcase_params)
else:
test_method(self, testcase_params)
if naming_type is _FIRST_ARG:
# Signal the metaclass that the name of the test function is unique
# and descriptive.
BoundParamTest.__x_use_name__ = True
BoundParamTest.__name__ += str(testcase_params[0])
testcase_params = testcase_params[1:]
elif naming_type is _ARGUMENT_REPR:
# __x_extra_id__ is used to pass naming information to the __new__
# method of TestGeneratorMetaclass.
# The metaclass will make sure to create a unique, but nondescriptive
# name for this test.
BoundParamTest.__x_extra_id__ = '(%s)' % (
_FormatParameterList(testcase_params),)
else:
raise RuntimeError('%s is not a valid naming type.' % (naming_type,))
BoundParamTest.__doc__ = '%s(%s)' % (
BoundParamTest.__name__, _FormatParameterList(testcase_params))
if test_method.__doc__:
BoundParamTest.__doc__ += '\n%s' % (test_method.__doc__,)
return BoundParamTest
return (MakeBoundParamTest(c) for c in self.testcases)
def _IsSingletonList(testcases):
"""True iff testcases contains only a single non-tuple element."""
return len(testcases) == 1 and not isinstance(testcases[0], tuple)
def _ModifyClass(class_object, testcases, naming_type):
assert not getattr(class_object, '_id_suffix', None), (
'Cannot add parameters to %s,'
' which already has parameterized methods.' % (class_object,))
class_object._id_suffix = id_suffix = {}
# We change the size of __dict__ while we iterate over it,
# which Python 3.x will complain about, so use copy().
for name, obj in class_object.__dict__.copy().items():
if (name.startswith(unittest.TestLoader.testMethodPrefix)
and isinstance(obj, types.FunctionType)):
delattr(class_object, name)
methods = {}
_UpdateClassDictForParamTestCase(
methods, id_suffix, name,
_ParameterizedTestIter(obj, testcases, naming_type))
for name, meth in methods.items():
setattr(class_object, name, meth)
def _ParameterDecorator(naming_type, testcases):
"""Implementation of the parameterization decorators.
Args:
naming_type: The naming type.
testcases: Testcase parameters.
Returns:
A function for modifying the decorated object.
"""
def _Apply(obj):
if isinstance(obj, type):
_ModifyClass(
obj,
list(testcases) if not isinstance(testcases, collections_abc.Sequence)
else testcases,
naming_type)
return obj
else:
return _ParameterizedTestIter(obj, testcases, naming_type)
if _IsSingletonList(testcases):
assert _NonStringIterable(testcases[0]), (
'Single parameter argument must be a non-string iterable')
testcases = testcases[0]
return _Apply
def parameters(*testcases): # pylint: disable=invalid-name
"""A decorator for creating parameterized tests.
See the module docstring for a usage example.
Args:
*testcases: Parameters for the decorated method, either a single
iterable, or a list of tuples/dicts/objects (for tests
with only one argument).
Returns:
A test generator to be handled by TestGeneratorMetaclass.
"""
return _ParameterDecorator(_ARGUMENT_REPR, testcases)
def named_parameters(*testcases): # pylint: disable=invalid-name
"""A decorator for creating parameterized tests.
See the module docstring for a usage example. The first element of
each parameter tuple should be a string and will be appended to the
name of the test method.
Args:
*testcases: Parameters for the decorated method, either a single
iterable, or a list of tuples.
Returns:
A test generator to be handled by TestGeneratorMetaclass.
"""
return _ParameterDecorator(_FIRST_ARG, testcases)
class TestGeneratorMetaclass(type):
"""Metaclass for test cases with test generators.
A test generator is an iterable in a testcase that produces callables. These
callables must be single-argument methods. These methods are injected into
the class namespace and the original iterable is removed. If the name of the
iterable conforms to the test pattern, the injected methods will be picked
up as tests by the unittest framework.
In general, it is supposed to be used in conjunction with the
parameters decorator.
"""
def __new__(mcs, class_name, bases, dct):
dct['_id_suffix'] = id_suffix = {}
for name, obj in dct.copy().items():
if (name.startswith(unittest.TestLoader.testMethodPrefix) and
_NonStringIterable(obj)):
iterator = iter(obj)
dct.pop(name)
_UpdateClassDictForParamTestCase(dct, id_suffix, name, iterator)
return type.__new__(mcs, class_name, bases, dct)
def _UpdateClassDictForParamTestCase(dct, id_suffix, name, iterator):
"""Adds individual test cases to a dictionary.
Args:
dct: The target dictionary.
id_suffix: The dictionary for mapping names to test IDs.
name: The original name of the test case.
iterator: The iterator generating the individual test cases.
"""
for idx, func in enumerate(iterator):
assert callable(func), 'Test generators must yield callables, got %r' % (
func,)
if getattr(func, '__x_use_name__', False):
new_name = func.__name__
else:
new_name = '%s%s%d' % (name, _SEPARATOR, idx)
assert new_name not in dct, (
'Name of parameterized test case "%s" not unique' % (new_name,))
dct[new_name] = func
id_suffix[new_name] = getattr(func, '__x_extra_id__', '')
class TestCase(unittest.TestCase, metaclass=TestGeneratorMetaclass):
"""Base class for test cases using the parameters decorator."""
def _OriginalName(self):
return self._testMethodName.split(_SEPARATOR)[0]
def __str__(self):
return '%s (%s)' % (self._OriginalName(), _StrClass(self.__class__))
def id(self): # pylint: disable=invalid-name
"""Returns the descriptive ID of the test.
This is used internally by the unittest framework to get a name
for the test to be used in reports.
Returns:
The test id.
"""
return '%s.%s%s' % (_StrClass(self.__class__),
self._OriginalName(),
self._id_suffix.get(self._testMethodName, ''))
def CoopTestCase(other_base_class):
"""Returns a new base class with a cooperative metaclass base.
This enables the TestCase to be used in combination
with other base classes that have custom metaclasses, such as
mox.MoxTestBase.
Only works with metaclasses that do not override type.__new__.
Example:
import google3
import mox
from google3.testing.pybase import parameterized
class ExampleTest(parameterized.CoopTestCase(mox.MoxTestBase)):
...
Args:
other_base_class: (class) A test case base class.
Returns:
A new class object.
"""
metaclass = type(
'CoopMetaclass',
(type(other_base_class),
TestGeneratorMetaclass), {})
return metaclass(
'CoopTestCase',
(other_base_class, TestCase), {})
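
A self-contained usage sketch of this vendored module (the import path below matches its location in this tree; outside protobuf one would normally use absl.testing.parameterized instead):

import unittest
from google.protobuf.internal import _parameterized as parameterized

class SquareTest(parameterized.TestCase):

  @parameterized.parameters((2, 4), (3, 9), (4, 16))
  def testSquare(self, value, expected):
    self.assertEqual(expected, value * value)

if __name__ == '__main__':
  unittest.main()  # runs three generated test cases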

View File

@@ -0,0 +1,112 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Determine which implementation of the protobuf API is used in this process.
"""
import os
import sys
import warnings
try:
# pylint: disable=g-import-not-at-top
from google.protobuf.internal import _api_implementation
# The compile-time constants in the _api_implementation module can be used to
# switch to a certain implementation of the Python API at build time.
_api_version = _api_implementation.api_version
except ImportError:
_api_version = -1 # Unspecified by compiler flags.
if _api_version == 1:
raise ValueError('api_version=1 is no longer supported.')
_default_implementation_type = ('cpp' if _api_version > 0 else 'python')
# This environment variable can be used to switch to a certain implementation
# of the Python API, overriding the compile-time constants in the
# _api_implementation module. Right now only 'python' and 'cpp' are valid
# values. Any other value will be ignored.
_implementation_type = os.getenv('PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION',
_default_implementation_type)
if _implementation_type != 'python':
_implementation_type = 'cpp'
if 'PyPy' in sys.version and _implementation_type == 'cpp':
warnings.warn('PyPy does not work yet with cpp protocol buffers. '
'Falling back to the python implementation.')
_implementation_type = 'python'
# Detect if serialization should be deterministic by default
try:
# The presence of this module in a build allows the proto implementation to
# be upgraded merely via build deps.
#
# NOTE: Merely importing this automatically enables deterministic proto
# serialization for C++ code, but we still need to export it as a boolean so
# that we can do the same for `_implementation_type == 'python'`.
#
# NOTE2: It is possible for C++ code to enable deterministic serialization by
# default _without_ affecting Python code, if the C++ implementation is not in
# use by this module. That is intended behavior, so we don't actually expose
# this boolean outside of this module.
#
# pylint: disable=g-import-not-at-top,unused-import
from google.protobuf import enable_deterministic_proto_serialization
_python_deterministic_proto_serialization = True
except ImportError:
_python_deterministic_proto_serialization = False
# Usage of this function is discouraged. Clients shouldn't care which
# implementation of the API is in use. Note that there is no guarantee
# that differences between APIs will be maintained.
# Please don't use this function if possible.
def Type():
return _implementation_type
def _SetType(implementation_type):
"""Never use! Only for protobuf benchmark."""
global _implementation_type
_implementation_type = implementation_type
# See comment on 'Type' above.
def Version():
return 2
# For internal use only
def IsPythonDefaultSerializationDeterministic():
return _python_deterministic_proto_serialization
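
The selection can be inspected, or forced, from the environment; note that the variable must be set before google.protobuf is imported for the first time:

import os
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'

from google.protobuf.internal import api_implementation
print(api_implementation.Type())   # 'python'; would be 'cpp' with the extension active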

View File

@@ -0,0 +1,130 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Builds descriptors, message classes and services for generated _pb2.py.
This file is only called in python generated _pb2.py files. It builds
descriptors, message classes and services that users can directly use
in generated code.
"""
__author__ = 'jieluo@google.com (Jie Luo)'
from google.protobuf.internal import enum_type_wrapper
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
_sym_db = _symbol_database.Default()
def BuildMessageAndEnumDescriptors(file_des, module):
"""Builds message and enum descriptors.
Args:
file_des: FileDescriptor of the .proto file
module: Generated _pb2 module
"""
def BuildNestedDescriptors(msg_des, prefix):
for (name, nested_msg) in msg_des.nested_types_by_name.items():
module_name = prefix + name.upper()
module[module_name] = nested_msg
BuildNestedDescriptors(nested_msg, module_name + '_')
for enum_des in msg_des.enum_types:
module[prefix + enum_des.name.upper()] = enum_des
for (name, msg_des) in file_des.message_types_by_name.items():
module_name = '_' + name.upper()
module[module_name] = msg_des
BuildNestedDescriptors(msg_des, module_name + '_')
def BuildTopDescriptorsAndMessages(file_des, module_name, module):
"""Builds top level descriptors and message classes.
Args:
file_des: FileDescriptor of the .proto file
module_name: str, the name of generated _pb2 module
module: Generated _pb2 module
"""
def BuildMessage(msg_des):
create_dict = {}
for (name, nested_msg) in msg_des.nested_types_by_name.items():
create_dict[name] = BuildMessage(nested_msg)
create_dict['DESCRIPTOR'] = msg_des
create_dict['__module__'] = module_name
message_class = _reflection.GeneratedProtocolMessageType(
msg_des.name, (_message.Message,), create_dict)
_sym_db.RegisterMessage(message_class)
return message_class
# top level enums
for (name, enum_des) in file_des.enum_types_by_name.items():
module['_' + name.upper()] = enum_des
module[name] = enum_type_wrapper.EnumTypeWrapper(enum_des)
for enum_value in enum_des.values:
module[enum_value.name] = enum_value.number
# top level extensions
for (name, extension_des) in file_des.extensions_by_name.items():
module[name.upper() + '_FIELD_NUMBER'] = extension_des.number
module[name] = extension_des
# services
for (name, service) in file_des.services_by_name.items():
module['_' + name.upper()] = service
# Build messages.
for (name, msg_des) in file_des.message_types_by_name.items():
module[name] = BuildMessage(msg_des)
def BuildServices(file_des, module_name, module):
"""Builds services classes and services stub class.
Args:
file_des: FileDescriptor of the .proto file
module_name: str, the name of generated _pb2 module
module: Generated _pb2 module
"""
# pylint: disable=g-import-not-at-top
from google.protobuf import service as _service
from google.protobuf import service_reflection
# pylint: enable=g-import-not-at-top
for (name, service) in file_des.services_by_name.items():
module[name] = service_reflection.GeneratedServiceType(
name, (_service.Service,),
dict(DESCRIPTOR=service, __module__=module_name))
stub_name = name + '_Stub'
module[stub_name] = service_reflection.GeneratedServiceStubType(
stub_name, (module[name],),
dict(DESCRIPTOR=service, __module__=module_name))

View File

@@ -0,0 +1,710 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Contains container classes to represent different protocol buffer types.
This file defines container classes which represent categories of protocol
buffer field types which need extra maintenance. Currently these categories
are:
- Repeated scalar fields - These are all repeated fields which aren't
composite (e.g. they are of simple types like int32, string, etc).
- Repeated composite fields - Repeated fields which are composite. This
includes groups and nested messages.
"""
import collections.abc
import copy
import pickle
from typing import (
Any,
Iterable,
Iterator,
List,
MutableMapping,
MutableSequence,
NoReturn,
Optional,
Sequence,
TypeVar,
Union,
overload,
)
_T = TypeVar('_T')
_K = TypeVar('_K')
_V = TypeVar('_V')
class BaseContainer(Sequence[_T]):
"""Base container class."""
# Minimizes memory usage and disallows assignment to other attributes.
__slots__ = ['_message_listener', '_values']
def __init__(self, message_listener: Any) -> None:
"""
Args:
message_listener: A MessageListener implementation.
The RepeatedScalarFieldContainer will call this object's
Modified() method when it is modified.
"""
self._message_listener = message_listener
self._values = []
@overload
def __getitem__(self, key: int) -> _T:
...
@overload
def __getitem__(self, key: slice) -> List[_T]:
...
def __getitem__(self, key):
"""Retrieves item by the specified key."""
return self._values[key]
def __len__(self) -> int:
"""Returns the number of elements in the container."""
return len(self._values)
def __ne__(self, other: Any) -> bool:
"""Checks if another instance isn't equal to this one."""
# The concrete classes should define __eq__.
return not self == other
__hash__ = None
def __repr__(self) -> str:
return repr(self._values)
def sort(self, *args, **kwargs) -> None:
# Continue to support the old sort_function keyword argument.
# This is expected to be a rare occurrence, so use LBYL to avoid
# the overhead of actually catching KeyError.
if 'sort_function' in kwargs:
kwargs['cmp'] = kwargs.pop('sort_function')
self._values.sort(*args, **kwargs)
def reverse(self) -> None:
self._values.reverse()
# TODO(slebedev): Remove this. BaseContainer does *not* conform to
# MutableSequence, only its subclasses do.
collections.abc.MutableSequence.register(BaseContainer)
class RepeatedScalarFieldContainer(BaseContainer[_T], MutableSequence[_T]):
"""Simple, type-checked, list-like container for holding repeated scalars."""
# Disallows assignment to other attributes.
__slots__ = ['_type_checker']
def __init__(
self,
message_listener: Any,
type_checker: Any,
) -> None:
"""Args:
message_listener: A MessageListener implementation. The
RepeatedScalarFieldContainer will call this object's Modified() method
when it is modified.
type_checker: A type_checkers.ValueChecker instance to run on elements
inserted into this container.
"""
super().__init__(message_listener)
self._type_checker = type_checker
def append(self, value: _T) -> None:
"""Appends an item to the list. Similar to list.append()."""
self._values.append(self._type_checker.CheckValue(value))
if not self._message_listener.dirty:
self._message_listener.Modified()
def insert(self, key: int, value: _T) -> None:
"""Inserts the item at the specified position. Similar to list.insert()."""
self._values.insert(key, self._type_checker.CheckValue(value))
if not self._message_listener.dirty:
self._message_listener.Modified()
def extend(self, elem_seq: Iterable[_T]) -> None:
"""Extends by appending the given iterable. Similar to list.extend()."""
if elem_seq is None:
return
try:
elem_seq_iter = iter(elem_seq)
except TypeError:
if not elem_seq:
# silently ignore falsy inputs :-/.
# TODO(ptucker): Deprecate this behavior. b/18413862
return
raise
new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter]
if new_values:
self._values.extend(new_values)
self._message_listener.Modified()
def MergeFrom(
self,
other: Union['RepeatedScalarFieldContainer[_T]', Iterable[_T]],
) -> None:
"""Appends the contents of another repeated field of the same type to this
one. We do not check the types of the individual fields.
"""
self._values.extend(other)
self._message_listener.Modified()
def remove(self, elem: _T):
"""Removes an item from the list. Similar to list.remove()."""
self._values.remove(elem)
self._message_listener.Modified()
def pop(self, key: Optional[int] = -1) -> _T:
"""Removes and returns an item at a given index. Similar to list.pop()."""
value = self._values[key]
self.__delitem__(key)
return value
@overload
def __setitem__(self, key: int, value: _T) -> None:
...
@overload
def __setitem__(self, key: slice, value: Iterable[_T]) -> None:
...
def __setitem__(self, key, value) -> None:
"""Sets the item on the specified position."""
if isinstance(key, slice):
if key.step is not None:
raise ValueError('Extended slices not supported')
self._values[key] = map(self._type_checker.CheckValue, value)
self._message_listener.Modified()
else:
self._values[key] = self._type_checker.CheckValue(value)
self._message_listener.Modified()
def __delitem__(self, key: Union[int, slice]) -> None:
"""Deletes the item at the specified position."""
del self._values[key]
self._message_listener.Modified()
def __eq__(self, other: Any) -> bool:
"""Compares the current instance with another one."""
if self is other:
return True
# Special case for the same type which should be common and fast.
if isinstance(other, self.__class__):
return other._values == self._values
# We are presumably comparing against some other sequence type.
return other == self._values
def __deepcopy__(
self,
unused_memo: Any = None,
) -> 'RepeatedScalarFieldContainer[_T]':
clone = RepeatedScalarFieldContainer(
copy.deepcopy(self._message_listener), self._type_checker)
clone.MergeFrom(self)
return clone
def __reduce__(self, **kwargs) -> NoReturn:
raise pickle.PickleError(
"Can't pickle repeated scalar fields, convert to list first")
# TODO(slebedev): Constrain T to be a subtype of Message.
class RepeatedCompositeFieldContainer(BaseContainer[_T], MutableSequence[_T]):
"""Simple, list-like container for holding repeated composite fields."""
# Disallows assignment to other attributes.
__slots__ = ['_message_descriptor']
def __init__(self, message_listener: Any, message_descriptor: Any) -> None:
"""
Note that we pass in a descriptor instead of the generated class directly,
since at the time we construct a _RepeatedCompositeFieldContainer we
haven't yet necessarily initialized the type that will be contained in the
container.
Args:
message_listener: A MessageListener implementation.
The RepeatedCompositeFieldContainer will call this object's
Modified() method when it is modified.
message_descriptor: A Descriptor instance describing the protocol type
that should be present in this container. We'll use the
_concrete_class field of this descriptor when the client calls add().
"""
super().__init__(message_listener)
self._message_descriptor = message_descriptor
def add(self, **kwargs: Any) -> _T:
"""Adds a new element at the end of the list and returns it. Keyword
arguments may be used to initialize the element.
"""
new_element = self._message_descriptor._concrete_class(**kwargs)
new_element._SetListener(self._message_listener)
self._values.append(new_element)
if not self._message_listener.dirty:
self._message_listener.Modified()
return new_element
def append(self, value: _T) -> None:
"""Appends one element by copying the message."""
new_element = self._message_descriptor._concrete_class()
new_element._SetListener(self._message_listener)
new_element.CopyFrom(value)
self._values.append(new_element)
if not self._message_listener.dirty:
self._message_listener.Modified()
def insert(self, key: int, value: _T) -> None:
"""Inserts the item at the specified position by copying."""
new_element = self._message_descriptor._concrete_class()
new_element._SetListener(self._message_listener)
new_element.CopyFrom(value)
self._values.insert(key, new_element)
if not self._message_listener.dirty:
self._message_listener.Modified()
def extend(self, elem_seq: Iterable[_T]) -> None:
"""Extends by appending the given sequence of elements of the same type
as this one, copying each individual message.
"""
message_class = self._message_descriptor._concrete_class
listener = self._message_listener
values = self._values
for message in elem_seq:
new_element = message_class()
new_element._SetListener(listener)
new_element.MergeFrom(message)
values.append(new_element)
listener.Modified()
def MergeFrom(
self,
other: Union['RepeatedCompositeFieldContainer[_T]', Iterable[_T]],
) -> None:
"""Appends the contents of another repeated field of the same type to this
one, copying each individual message.
"""
self.extend(other)
def remove(self, elem: _T) -> None:
"""Removes an item from the list. Similar to list.remove()."""
self._values.remove(elem)
self._message_listener.Modified()
def pop(self, key: Optional[int] = -1) -> _T:
"""Removes and returns an item at a given index. Similar to list.pop()."""
value = self._values[key]
self.__delitem__(key)
return value
@overload
def __setitem__(self, key: int, value: _T) -> None:
...
@overload
def __setitem__(self, key: slice, value: Iterable[_T]) -> None:
...
def __setitem__(self, key, value):
# This method is implemented to make RepeatedCompositeFieldContainer
# structurally compatible with typing.MutableSequence. It is
# otherwise unsupported and will always raise an error.
raise TypeError(
f'{self.__class__.__name__} object does not support item assignment')
def __delitem__(self, key: Union[int, slice]) -> None:
"""Deletes the item at the specified position."""
del self._values[key]
self._message_listener.Modified()
def __eq__(self, other: Any) -> bool:
"""Compares the current instance with another one."""
if self is other:
return True
if not isinstance(other, self.__class__):
raise TypeError('Can only compare repeated composite fields against '
'other repeated composite fields.')
return self._values == other._values
class ScalarMap(MutableMapping[_K, _V]):
"""Simple, type-checked, dict-like container for holding repeated scalars."""
# Disallows assignment to other attributes.
__slots__ = ['_key_checker', '_value_checker', '_values', '_message_listener',
'_entry_descriptor']
def __init__(
self,
message_listener: Any,
key_checker: Any,
value_checker: Any,
entry_descriptor: Any,
) -> None:
"""
Args:
message_listener: A MessageListener implementation.
The ScalarMap will call this object's Modified() method when it
is modified.
key_checker: A type_checkers.ValueChecker instance to run on keys
inserted into this container.
value_checker: A type_checkers.ValueChecker instance to run on values
inserted into this container.
entry_descriptor: The MessageDescriptor of a map entry: key and value.
"""
self._message_listener = message_listener
self._key_checker = key_checker
self._value_checker = value_checker
self._entry_descriptor = entry_descriptor
self._values = {}
def __getitem__(self, key: _K) -> _V:
try:
return self._values[key]
except KeyError:
key = self._key_checker.CheckValue(key)
val = self._value_checker.DefaultValue()
self._values[key] = val
return val
def __contains__(self, item: _K) -> bool:
# We check the key's type to match the strong-typing flavor of the API.
# Also this makes it easier to match the behavior of the C++ implementation.
self._key_checker.CheckValue(item)
return item in self._values
@overload
def get(self, key: _K) -> Optional[_V]:
...
@overload
def get(self, key: _K, default: _T) -> Union[_V, _T]:
...
# We need to override this explicitly, because our defaultdict-like behavior
# will make the default implementation (from our base class) always insert
# the key.
def get(self, key, default=None):
if key in self:
return self[key]
else:
return default
def __setitem__(self, key: _K, value: _V) -> None:
checked_key = self._key_checker.CheckValue(key)
checked_value = self._value_checker.CheckValue(value)
self._values[checked_key] = checked_value
self._message_listener.Modified()
def __delitem__(self, key: _K) -> None:
del self._values[key]
self._message_listener.Modified()
def __len__(self) -> int:
return len(self._values)
def __iter__(self) -> Iterator[_K]:
return iter(self._values)
def __repr__(self) -> str:
return repr(self._values)
def MergeFrom(self, other: 'ScalarMap[_K, _V]') -> None:
self._values.update(other._values)
self._message_listener.Modified()
def InvalidateIterators(self) -> None:
# It appears that the only way to reliably invalidate iterators to
# self._values is to ensure that its size changes.
original = self._values
self._values = original.copy()
original[None] = None
# This is defined in the abstract base, but we can do it much more cheaply.
def clear(self) -> None:
self._values.clear()
self._message_listener.Modified()
def GetEntryClass(self) -> Any:
return self._entry_descriptor._concrete_class
class MessageMap(MutableMapping[_K, _V]):
"""Simple, type-checked, dict-like container for with submessage values."""
# Disallows assignment to other attributes.
__slots__ = ['_key_checker', '_values', '_message_listener',
'_message_descriptor', '_entry_descriptor']
def __init__(
self,
message_listener: Any,
message_descriptor: Any,
key_checker: Any,
entry_descriptor: Any,
) -> None:
"""
Args:
message_listener: A MessageListener implementation.
The MessageMap will call this object's Modified() method when it
is modified.
message_descriptor: A Descriptor instance describing the message type
held as values; its _concrete_class is instantiated on first access.
key_checker: A type_checkers.ValueChecker instance to run on keys
inserted into this container.
entry_descriptor: The MessageDescriptor of a map entry: key and value.
"""
self._message_listener = message_listener
self._message_descriptor = message_descriptor
self._key_checker = key_checker
self._entry_descriptor = entry_descriptor
self._values = {}
def __getitem__(self, key: _K) -> _V:
key = self._key_checker.CheckValue(key)
try:
return self._values[key]
except KeyError:
new_element = self._message_descriptor._concrete_class()
new_element._SetListener(self._message_listener)
self._values[key] = new_element
self._message_listener.Modified()
return new_element
def get_or_create(self, key: _K) -> _V:
"""get_or_create() is an alias for getitem (ie. map[key]).
Args:
key: The key to get or create in the map.
This is useful in cases where you want to be explicit that the call is
mutating the map. This can avoid lint errors for statements like this
that otherwise would appear to be pointless statements:
msg.my_map[key]
"""
return self[key]
@overload
def get(self, key: _K) -> Optional[_V]:
...
@overload
def get(self, key: _K, default: _T) -> Union[_V, _T]:
...
# We need to override this explicitly, because our defaultdict-like behavior
# will make the default implementation (from our base class) always insert
# the key.
def get(self, key, default=None):
if key in self:
return self[key]
else:
return default
def __contains__(self, item: _K) -> bool:
item = self._key_checker.CheckValue(item)
return item in self._values
def __setitem__(self, key: _K, value: _V) -> NoReturn:
raise ValueError('May not set values directly, call my_map[key].foo = 5')
def __delitem__(self, key: _K) -> None:
key = self._key_checker.CheckValue(key)
del self._values[key]
self._message_listener.Modified()
def __len__(self) -> int:
return len(self._values)
def __iter__(self) -> Iterator[_K]:
return iter(self._values)
def __repr__(self) -> str:
return repr(self._values)
def MergeFrom(self, other: 'MessageMap[_K, _V]') -> None:
# pylint: disable=protected-access
for key in other._values:
# According to documentation: "When parsing from the wire or when merging,
# if there are duplicate map keys the last key seen is used".
if key in self:
del self[key]
self[key].CopyFrom(other[key])
# self._message_listener.Modified() not required here, because
# mutations to submessages already propagate.
def InvalidateIterators(self) -> None:
# It appears that the only way to reliably invalidate iterators to
# self._values is to ensure that its size changes.
original = self._values
self._values = original.copy()
original[None] = None
# This is defined in the abstract base, but we can do it much more cheaply.
def clear(self) -> None:
self._values.clear()
self._message_listener.Modified()
def GetEntryClass(self) -> Any:
return self._entry_descriptor._concrete_class
class _UnknownField:
"""A parsed unknown field."""
# Disallows assignment to other attributes.
__slots__ = ['_field_number', '_wire_type', '_data']
def __init__(self, field_number, wire_type, data):
self._field_number = field_number
self._wire_type = wire_type
self._data = data
return
def __lt__(self, other):
# pylint: disable=protected-access
return self._field_number < other._field_number
def __eq__(self, other):
if self is other:
return True
# pylint: disable=protected-access
return (self._field_number == other._field_number and
self._wire_type == other._wire_type and
self._data == other._data)
class UnknownFieldRef: # pylint: disable=missing-class-docstring
def __init__(self, parent, index):
self._parent = parent
self._index = index
def _check_valid(self):
if not self._parent:
raise ValueError('UnknownField does not exist. '
'The parent message might be cleared.')
if self._index >= len(self._parent):
raise ValueError('UnknownField does not exist. '
'The parent message might be cleared.')
@property
def field_number(self):
self._check_valid()
# pylint: disable=protected-access
return self._parent._internal_get(self._index)._field_number
@property
def wire_type(self):
self._check_valid()
# pylint: disable=protected-access
return self._parent._internal_get(self._index)._wire_type
@property
def data(self):
self._check_valid()
# pylint: disable=protected-access
return self._parent._internal_get(self._index)._data
class UnknownFieldSet:
"""UnknownField container"""
# Disallows assignment to other attributes.
__slots__ = ['_values']
def __init__(self):
self._values = []
def __getitem__(self, index):
if self._values is None:
raise ValueError('UnknownFields does not exist. '
'The parent message might be cleared.')
size = len(self._values)
if index < 0:
index += size
if index < 0 or index >= size:
raise IndexError('index %d out of range' % index)
return UnknownFieldRef(self, index)
def _internal_get(self, index):
return self._values[index]
def __len__(self):
if self._values is None:
raise ValueError('UnknownFields does not exist. '
'The parent message might be cleared.')
return len(self._values)
def _add(self, field_number, wire_type, data):
unknown_field = _UnknownField(field_number, wire_type, data)
self._values.append(unknown_field)
return unknown_field
def __iter__(self):
for i in range(len(self)):
yield UnknownFieldRef(self, i)
def _extend(self, other):
if other is None:
return
# pylint: disable=protected-access
self._values.extend(other._values)
def __eq__(self, other):
if self is other:
return True
# Sort unknown fields because their order shouldn't
# affect equality test.
values = list(self._values)
if other is None:
return not values
values.sort()
# pylint: disable=protected-access
other_values = sorted(other._values)
return values == other_values
def _clear(self):
for value in self._values:
# pylint: disable=protected-access
if isinstance(value._data, UnknownFieldSet):
value._data._clear() # pylint: disable=protected-access
self._values = None
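# Usage sketch (added for illustration; _add and __getitem__ are private
# helpers from this file, so real callers go through the message API):
_ufs = UnknownFieldSet()
_ufs._add(field_number=5, wire_type=0, data=150)  # wire type 0 = varint
_ref = _ufs[0]  # an UnknownFieldRef view, not the raw _UnknownField
assert (_ref.field_number, _ref.wire_type, _ref.data) == (5, 0, 150)
assert len(_ufs) == 1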

File diff suppressed because it is too large

View File

@@ -0,0 +1,829 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Code for encoding protocol message primitives.
Contains the logic for encoding every logical protocol field type
into one of the 5 physical wire types.
This code is designed to push the Python interpreter's performance to the
limits.
The basic idea is that at startup time, for every field (i.e. every
FieldDescriptor) we construct two functions: a "sizer" and an "encoder". The
sizer takes a value of this field's type and computes its byte size. The
encoder takes a writer function and a value. It encodes the value into byte
strings and invokes the writer function to write those strings. Typically the
writer function is the write() method of a BytesIO.
We try to do as much work as possible when constructing the writer and the
sizer rather than when calling them. In particular:
* We copy any needed global functions to local variables, so that we do not need
to do costly global table lookups at runtime.
* Similarly, we try to do any attribute lookups at startup time if possible.
* Every field's tag is encoded to bytes at startup, since it can't change at
runtime.
* Whatever component of the field size we can compute at startup, we do.
* We *avoid* sharing code if doing so would make the code slower and not sharing
does not burden us too much. For example, encoders for repeated fields do
not just call the encoders for singular fields in a loop because this would
add an extra function call overhead for every loop iteration; instead, we
manually inline the single-value encoder into the loop.
* If a Python function lacks a return statement, Python actually generates
instructions to pop the result of the last statement off the stack, push
None onto the stack, and then return that. If we really don't care what
value is returned, then we can save two instructions by returning the
result of the last statement. It looks funny but it helps.
* We assume that type and bounds checking has happened at a higher level.
"""
__author__ = 'kenton@google.com (Kenton Varda)'
import struct
from google.protobuf.internal import wire_format
# This will overflow and thus become IEEE-754 "infinity". We would use
# "float('inf')" but it doesn't work on Windows pre-Python-2.6.
_POS_INF = 1e10000
_NEG_INF = -_POS_INF
def _VarintSize(value):
"""Compute the size of a varint value."""
if value <= 0x7f: return 1
if value <= 0x3fff: return 2
if value <= 0x1fffff: return 3
if value <= 0xfffffff: return 4
if value <= 0x7ffffffff: return 5
if value <= 0x3ffffffffff: return 6
if value <= 0x1ffffffffffff: return 7
if value <= 0xffffffffffffff: return 8
if value <= 0x7fffffffffffffff: return 9
return 10
def _SignedVarintSize(value):
"""Compute the size of a signed varint value."""
if value < 0: return 10
if value <= 0x7f: return 1
if value <= 0x3fff: return 2
if value <= 0x1fffff: return 3
if value <= 0xfffffff: return 4
if value <= 0x7ffffffff: return 5
if value <= 0x3ffffffffff: return 6
if value <= 0x1ffffffffffff: return 7
if value <= 0xffffffffffffff: return 8
if value <= 0x7fffffffffffffff: return 9
return 10
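# Sanity checks of the thresholds above (added for illustration). Each varint
# byte carries 7 payload bits, so 127 fits in one byte and 128 needs two;
# negative values always occupy the full 10 bytes.
assert _VarintSize(127) == 1 and _VarintSize(128) == 2
assert _SignedVarintSize(-1) == 10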
def _TagSize(field_number):
"""Returns the number of bytes required to serialize a tag with this field
number."""
# Just pass in type 0, since the type won't affect the tag+type size.
return _VarintSize(wire_format.PackTag(field_number, 0))
# --------------------------------------------------------------------
# In this section we define some generic sizers. Each of these functions
# takes parameters specific to a particular field type, e.g. int32 or fixed64.
# It returns another function which in turn takes parameters specific to a
# particular field, e.g. the field number and whether it is repeated or packed.
# Look at the next section to see how these are used.
def _SimpleSizer(compute_value_size):
"""A sizer which uses the function compute_value_size to compute the size of
each value. Typically compute_value_size is _VarintSize."""
def SpecificSizer(field_number, is_repeated, is_packed):
tag_size = _TagSize(field_number)
if is_packed:
local_VarintSize = _VarintSize
def PackedFieldSize(value):
result = 0
for element in value:
result += compute_value_size(element)
return result + local_VarintSize(result) + tag_size
return PackedFieldSize
elif is_repeated:
def RepeatedFieldSize(value):
result = tag_size * len(value)
for element in value:
result += compute_value_size(element)
return result
return RepeatedFieldSize
else:
def FieldSize(value):
return tag_size + compute_value_size(value)
return FieldSize
return SpecificSizer
def _ModifiedSizer(compute_value_size, modify_value):
"""Like SimpleSizer, but modify_value is invoked on each value before it is
passed to compute_value_size. modify_value is typically ZigZagEncode."""
def SpecificSizer(field_number, is_repeated, is_packed):
tag_size = _TagSize(field_number)
if is_packed:
local_VarintSize = _VarintSize
def PackedFieldSize(value):
result = 0
for element in value:
result += compute_value_size(modify_value(element))
return result + local_VarintSize(result) + tag_size
return PackedFieldSize
elif is_repeated:
def RepeatedFieldSize(value):
result = tag_size * len(value)
for element in value:
result += compute_value_size(modify_value(element))
return result
return RepeatedFieldSize
else:
def FieldSize(value):
return tag_size + compute_value_size(modify_value(value))
return FieldSize
return SpecificSizer
def _FixedSizer(value_size):
"""Like _SimpleSizer except for a fixed-size field. The input is the size
of one value."""
def SpecificSizer(field_number, is_repeated, is_packed):
tag_size = _TagSize(field_number)
if is_packed:
local_VarintSize = _VarintSize
def PackedFieldSize(value):
result = len(value) * value_size
return result + local_VarintSize(result) + tag_size
return PackedFieldSize
elif is_repeated:
element_size = value_size + tag_size
def RepeatedFieldSize(value):
return len(value) * element_size
return RepeatedFieldSize
else:
field_size = value_size + tag_size
def FieldSize(value):
return field_size
return FieldSize
return SpecificSizer
# ====================================================================
# Here we declare a sizer constructor for each field type. Each "sizer
# constructor" is a function that takes (field_number, is_repeated, is_packed)
# as parameters and returns a sizer, which in turn takes a field value as
# a parameter and returns its encoded size.
Int32Sizer = Int64Sizer = EnumSizer = _SimpleSizer(_SignedVarintSize)
UInt32Sizer = UInt64Sizer = _SimpleSizer(_VarintSize)
SInt32Sizer = SInt64Sizer = _ModifiedSizer(
_SignedVarintSize, wire_format.ZigZagEncode)
Fixed32Sizer = SFixed32Sizer = FloatSizer = _FixedSizer(4)
Fixed64Sizer = SFixed64Sizer = DoubleSizer = _FixedSizer(8)
BoolSizer = _FixedSizer(1)
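# Quick illustration (added; not part of the original file): a sizer
# constructor yields a per-field sizing function.
_size_uint32 = UInt32Sizer(1, False, False)  # field 1, singular, not packed
assert _size_uint32(150) == 3  # 1 tag byte + 2 varint bytes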
def StringSizer(field_number, is_repeated, is_packed):
"""Returns a sizer for a string field."""
tag_size = _TagSize(field_number)
local_VarintSize = _VarintSize
local_len = len
assert not is_packed
if is_repeated:
def RepeatedFieldSize(value):
result = tag_size * len(value)
for element in value:
l = local_len(element.encode('utf-8'))
result += local_VarintSize(l) + l
return result
return RepeatedFieldSize
else:
def FieldSize(value):
l = local_len(value.encode('utf-8'))
return tag_size + local_VarintSize(l) + l
return FieldSize
def BytesSizer(field_number, is_repeated, is_packed):
"""Returns a sizer for a bytes field."""
tag_size = _TagSize(field_number)
local_VarintSize = _VarintSize
local_len = len
assert not is_packed
if is_repeated:
def RepeatedFieldSize(value):
result = tag_size * len(value)
for element in value:
l = local_len(element)
result += local_VarintSize(l) + l
return result
return RepeatedFieldSize
else:
def FieldSize(value):
l = local_len(value)
return tag_size + local_VarintSize(l) + l
return FieldSize
def GroupSizer(field_number, is_repeated, is_packed):
"""Returns a sizer for a group field."""
tag_size = _TagSize(field_number) * 2
assert not is_packed
if is_repeated:
def RepeatedFieldSize(value):
result = tag_size * len(value)
for element in value:
result += element.ByteSize()
return result
return RepeatedFieldSize
else:
def FieldSize(value):
return tag_size + value.ByteSize()
return FieldSize
def MessageSizer(field_number, is_repeated, is_packed):
"""Returns a sizer for a message field."""
tag_size = _TagSize(field_number)
local_VarintSize = _VarintSize
assert not is_packed
if is_repeated:
def RepeatedFieldSize(value):
result = tag_size * len(value)
for element in value:
l = element.ByteSize()
result += local_VarintSize(l) + l
return result
return RepeatedFieldSize
else:
def FieldSize(value):
l = value.ByteSize()
return tag_size + local_VarintSize(l) + l
return FieldSize
# --------------------------------------------------------------------
# MessageSet is special: it needs custom logic to compute its size properly.
def MessageSetItemSizer(field_number):
"""Returns a sizer for extensions of MessageSet.
The message set message looks like this:
message MessageSet {
repeated group Item = 1 {
required int32 type_id = 2;
required string message = 3;
}
}
"""
static_size = (_TagSize(1) * 2 + _TagSize(2) + _VarintSize(field_number) +
_TagSize(3))
local_VarintSize = _VarintSize
def FieldSize(value):
l = value.ByteSize()
return static_size + local_VarintSize(l) + l
return FieldSize
# --------------------------------------------------------------------
# Map is special: it needs custom logic to compute its size properly.
def MapSizer(field_descriptor, is_message_map):
"""Returns a sizer for a map field."""
# Can't look at field_descriptor.message_type._concrete_class because it may
# not have been initialized yet.
message_type = field_descriptor.message_type
message_sizer = MessageSizer(field_descriptor.number, False, False)
def FieldSize(map_value):
total = 0
for key in map_value:
value = map_value[key]
# It's wasteful to create the messages and throw them away one second
# later since we'll do the same for the actual encode. But there's not an
# obvious way to avoid this within the current design without tons of code
duplication. For a message map, value.ByteSize() should be called to
refresh the value's cached byte size.
entry_msg = message_type._concrete_class(key=key, value=value)
total += message_sizer(entry_msg)
if is_message_map:
value.ByteSize()
return total
return FieldSize
# ====================================================================
# Encoders!
def _VarintEncoder():
"""Return an encoder for a basic varint value (does not include tag)."""
local_int2byte = struct.Struct('>B').pack
def EncodeVarint(write, value, unused_deterministic=None):
bits = value & 0x7f
value >>= 7
while value:
write(local_int2byte(0x80|bits))
bits = value & 0x7f
value >>= 7
return write(local_int2byte(bits))
return EncodeVarint
def _SignedVarintEncoder():
"""Return an encoder for a basic signed varint value (does not include
tag)."""
local_int2byte = struct.Struct('>B').pack
def EncodeSignedVarint(write, value, unused_deterministic=None):
if value < 0:
value += (1 << 64)
bits = value & 0x7f
value >>= 7
while value:
write(local_int2byte(0x80|bits))
bits = value & 0x7f
value >>= 7
return write(local_int2byte(bits))
return EncodeSignedVarint
_EncodeVarint = _VarintEncoder()
_EncodeSignedVarint = _SignedVarintEncoder()
def _VarintBytes(value):
"""Encode the given integer as a varint and return the bytes. This is only
called at startup time so it doesn't need to be fast."""
pieces = []
_EncodeVarint(pieces.append, value, True)
return b"".join(pieces)
def TagBytes(field_number, wire_type):
"""Encode the given tag and return the bytes. Only called at startup."""
return bytes(_VarintBytes(wire_format.PackTag(field_number, wire_type)))
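# Worked example (added): 300 is 0b100101100; the low 7 bits (0101100) go out
# first with the continuation bit set, giving 0xAC, then 0x02.
assert _VarintBytes(300) == b'\xac\x02'
assert TagBytes(1, 0) == b'\x08'  # tag = (field_number << 3) | wire_type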
# --------------------------------------------------------------------
# As with sizers (see above), we have a number of common encoder
# implementations.
def _SimpleEncoder(wire_type, encode_value, compute_value_size):
"""Return a constructor for an encoder for fields of a particular type.
Args:
wire_type: The field's wire type, for encoding tags.
encode_value: A function which encodes an individual value, e.g.
_EncodeVarint().
compute_value_size: A function which computes the size of an individual
value, e.g. _VarintSize().
"""
def SpecificEncoder(field_number, is_repeated, is_packed):
if is_packed:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
def EncodePackedField(write, value, deterministic):
write(tag_bytes)
size = 0
for element in value:
size += compute_value_size(element)
local_EncodeVarint(write, size, deterministic)
for element in value:
encode_value(write, element, deterministic)
return EncodePackedField
elif is_repeated:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeRepeatedField(write, value, deterministic):
for element in value:
write(tag_bytes)
encode_value(write, element, deterministic)
return EncodeRepeatedField
else:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeField(write, value, deterministic):
write(tag_bytes)
return encode_value(write, value, deterministic)
return EncodeField
return SpecificEncoder
def _ModifiedEncoder(wire_type, encode_value, compute_value_size, modify_value):
"""Like SimpleEncoder but additionally invokes modify_value on every value
before passing it to encode_value. Usually modify_value is ZigZagEncode."""
def SpecificEncoder(field_number, is_repeated, is_packed):
if is_packed:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
def EncodePackedField(write, value, deterministic):
write(tag_bytes)
size = 0
for element in value:
size += compute_value_size(modify_value(element))
local_EncodeVarint(write, size, deterministic)
for element in value:
encode_value(write, modify_value(element), deterministic)
return EncodePackedField
elif is_repeated:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeRepeatedField(write, value, deterministic):
for element in value:
write(tag_bytes)
encode_value(write, modify_value(element), deterministic)
return EncodeRepeatedField
else:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeField(write, value, deterministic):
write(tag_bytes)
return encode_value(write, modify_value(value), deterministic)
return EncodeField
return SpecificEncoder
def _StructPackEncoder(wire_type, format):
"""Return a constructor for an encoder for a fixed-width field.
Args:
wire_type: The field's wire type, for encoding tags.
format: The format string to pass to struct.pack().
"""
value_size = struct.calcsize(format)
def SpecificEncoder(field_number, is_repeated, is_packed):
local_struct_pack = struct.pack
if is_packed:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
def EncodePackedField(write, value, deterministic):
write(tag_bytes)
local_EncodeVarint(write, len(value) * value_size, deterministic)
for element in value:
write(local_struct_pack(format, element))
return EncodePackedField
elif is_repeated:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeRepeatedField(write, value, unused_deterministic=None):
for element in value:
write(tag_bytes)
write(local_struct_pack(format, element))
return EncodeRepeatedField
else:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeField(write, value, unused_deterministic=None):
write(tag_bytes)
return write(local_struct_pack(format, value))
return EncodeField
return SpecificEncoder
def _FloatingPointEncoder(wire_type, format):
"""Return a constructor for an encoder for float fields.
This is like StructPackEncoder, but catches errors that may be due to
passing non-finite floating-point values to struct.pack, and makes a
second attempt to encode those values.
Args:
wire_type: The field's wire type, for encoding tags.
format: The format string to pass to struct.pack().
"""
value_size = struct.calcsize(format)
if value_size == 4:
def EncodeNonFiniteOrRaise(write, value):
# Remember that the serialized form uses little-endian byte order.
if value == _POS_INF:
write(b'\x00\x00\x80\x7F')
elif value == _NEG_INF:
write(b'\x00\x00\x80\xFF')
elif value != value: # NaN
write(b'\x00\x00\xC0\x7F')
else:
raise
elif value_size == 8:
def EncodeNonFiniteOrRaise(write, value):
if value == _POS_INF:
write(b'\x00\x00\x00\x00\x00\x00\xF0\x7F')
elif value == _NEG_INF:
write(b'\x00\x00\x00\x00\x00\x00\xF0\xFF')
elif value != value: # NaN
write(b'\x00\x00\x00\x00\x00\x00\xF8\x7F')
else:
raise
else:
raise ValueError('Can\'t encode floating-point values that are '
'%d bytes long (only 4 or 8)' % value_size)
def SpecificEncoder(field_number, is_repeated, is_packed):
local_struct_pack = struct.pack
if is_packed:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
def EncodePackedField(write, value, deterministic):
write(tag_bytes)
local_EncodeVarint(write, len(value) * value_size, deterministic)
for element in value:
# This try/except block is going to be faster than any code that
# we could write to check whether element is finite.
try:
write(local_struct_pack(format, element))
except SystemError:
EncodeNonFiniteOrRaise(write, element)
return EncodePackedField
elif is_repeated:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeRepeatedField(write, value, unused_deterministic=None):
for element in value:
write(tag_bytes)
try:
write(local_struct_pack(format, element))
except SystemError:
EncodeNonFiniteOrRaise(write, element)
return EncodeRepeatedField
else:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeField(write, value, unused_deterministic=None):
write(tag_bytes)
try:
write(local_struct_pack(format, value))
except SystemError:
EncodeNonFiniteOrRaise(write, value)
return EncodeField
return SpecificEncoder
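# The special patterns above are simply the little-endian IEEE-754 encodings
# of the non-finite values (added check; struct is already imported above):
assert struct.pack('<f', _POS_INF) == b'\x00\x00\x80\x7F'
assert struct.pack('<d', _NEG_INF) == b'\x00\x00\x00\x00\x00\x00\xF0\xFF'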
# ====================================================================
# Here we declare an encoder constructor for each field type. These work
# very similarly to sizer constructors, described earlier.
Int32Encoder = Int64Encoder = EnumEncoder = _SimpleEncoder(
wire_format.WIRETYPE_VARINT, _EncodeSignedVarint, _SignedVarintSize)
UInt32Encoder = UInt64Encoder = _SimpleEncoder(
wire_format.WIRETYPE_VARINT, _EncodeVarint, _VarintSize)
SInt32Encoder = SInt64Encoder = _ModifiedEncoder(
wire_format.WIRETYPE_VARINT, _EncodeVarint, _VarintSize,
wire_format.ZigZagEncode)
# Note that Python conveniently guarantees that when using the '<' prefix on
# formats, they will also have the same size across all platforms (as opposed
# to without the prefix, where their sizes depend on the C compiler's basic
# type sizes).
Fixed32Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED32, '<I')
Fixed64Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED64, '<Q')
SFixed32Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED32, '<i')
SFixed64Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED64, '<q')
FloatEncoder = _FloatingPointEncoder(wire_format.WIRETYPE_FIXED32, '<f')
DoubleEncoder = _FloatingPointEncoder(wire_format.WIRETYPE_FIXED64, '<d')
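# Illustration (added; not part of the original file): encoders are built
# per-field, then called with a write callback.
_out = []
_encode_uint32 = UInt32Encoder(1, False, False)  # field 1, singular
_encode_uint32(_out.append, 300, True)
assert b''.join(_out) == b'\x08\xac\x02'  # tag 0x08, then varint 300
# The ZigZag mapping used by the SInt* encoders above:
assert wire_format.ZigZagEncode(-1) == 1 and wire_format.ZigZagEncode(1) == 2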
def BoolEncoder(field_number, is_repeated, is_packed):
"""Returns an encoder for a boolean field."""
false_byte = b'\x00'
true_byte = b'\x01'
if is_packed:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
def EncodePackedField(write, value, deterministic):
write(tag_bytes)
local_EncodeVarint(write, len(value), deterministic)
for element in value:
if element:
write(true_byte)
else:
write(false_byte)
return EncodePackedField
elif is_repeated:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_VARINT)
def EncodeRepeatedField(write, value, unused_deterministic=None):
for element in value:
write(tag_bytes)
if element:
write(true_byte)
else:
write(false_byte)
return EncodeRepeatedField
else:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_VARINT)
def EncodeField(write, value, unused_deterministic=None):
write(tag_bytes)
if value:
return write(true_byte)
return write(false_byte)
return EncodeField
def StringEncoder(field_number, is_repeated, is_packed):
"""Returns an encoder for a string field."""
tag = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
local_len = len
assert not is_packed
if is_repeated:
def EncodeRepeatedField(write, value, deterministic):
for element in value:
encoded = element.encode('utf-8')
write(tag)
local_EncodeVarint(write, local_len(encoded), deterministic)
write(encoded)
return EncodeRepeatedField
else:
def EncodeField(write, value, deterministic):
encoded = value.encode('utf-8')
write(tag)
local_EncodeVarint(write, local_len(encoded), deterministic)
return write(encoded)
return EncodeField
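# Example (added): a singular string field 2 encodes as tag 0x12
# ((2 << 3) | 2), a varint length, then the UTF-8 bytes.
_out = []
StringEncoder(2, False, False)(_out.append, u'hi', True)
assert b''.join(_out) == b'\x12\x02hi'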
def BytesEncoder(field_number, is_repeated, is_packed):
"""Returns an encoder for a bytes field."""
tag = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
local_len = len
assert not is_packed
if is_repeated:
def EncodeRepeatedField(write, value, deterministic):
for element in value:
write(tag)
local_EncodeVarint(write, local_len(element), deterministic)
write(element)
return EncodeRepeatedField
else:
def EncodeField(write, value, deterministic):
write(tag)
local_EncodeVarint(write, local_len(value), deterministic)
return write(value)
return EncodeField
def GroupEncoder(field_number, is_repeated, is_packed):
"""Returns an encoder for a group field."""
start_tag = TagBytes(field_number, wire_format.WIRETYPE_START_GROUP)
end_tag = TagBytes(field_number, wire_format.WIRETYPE_END_GROUP)
assert not is_packed
if is_repeated:
def EncodeRepeatedField(write, value, deterministic):
for element in value:
write(start_tag)
element._InternalSerialize(write, deterministic)
write(end_tag)
return EncodeRepeatedField
else:
def EncodeField(write, value, deterministic):
write(start_tag)
value._InternalSerialize(write, deterministic)
return write(end_tag)
return EncodeField
def MessageEncoder(field_number, is_repeated, is_packed):
"""Returns an encoder for a message field."""
tag = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
assert not is_packed
if is_repeated:
def EncodeRepeatedField(write, value, deterministic):
for element in value:
write(tag)
local_EncodeVarint(write, element.ByteSize(), deterministic)
element._InternalSerialize(write, deterministic)
return EncodeRepeatedField
else:
def EncodeField(write, value, deterministic):
write(tag)
local_EncodeVarint(write, value.ByteSize(), deterministic)
return value._InternalSerialize(write, deterministic)
return EncodeField
# --------------------------------------------------------------------
# As before, MessageSet is special.
def MessageSetItemEncoder(field_number):
"""Encoder for extensions of MessageSet.
The message set message looks like this:
message MessageSet {
repeated group Item = 1 {
required int32 type_id = 2;
required string message = 3;
}
}
"""
start_bytes = b"".join([
TagBytes(1, wire_format.WIRETYPE_START_GROUP),
TagBytes(2, wire_format.WIRETYPE_VARINT),
_VarintBytes(field_number),
TagBytes(3, wire_format.WIRETYPE_LENGTH_DELIMITED)])
end_bytes = TagBytes(1, wire_format.WIRETYPE_END_GROUP)
local_EncodeVarint = _EncodeVarint
def EncodeField(write, value, deterministic):
write(start_bytes)
local_EncodeVarint(write, value.ByteSize(), deterministic)
value._InternalSerialize(write, deterministic)
return write(end_bytes)
return EncodeField
# --------------------------------------------------------------------
# As before, Map is special.
def MapEncoder(field_descriptor):
"""Encoder for extensions of MessageSet.
Maps always have a wire format like this:
message MapEntry {
key_type key = 1;
value_type value = 2;
}
repeated MapEntry map = N;
"""
# Can't look at field_descriptor.message_type._concrete_class because it may
# not have been initialized yet.
message_type = field_descriptor.message_type
encode_message = MessageEncoder(field_descriptor.number, False, False)
def EncodeField(write, value, deterministic):
value_keys = sorted(value.keys()) if deterministic else value
for key in value_keys:
entry_msg = message_type._concrete_class(key=key, value=value[key])
encode_message(write, entry_msg, deterministic)
return EncodeField

View File

@@ -0,0 +1,124 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""A simple wrapper around enum types to expose utility functions.
Instances are created as properties with the same name as the enum they wrap
on proto classes. For usage, see:
reflection_test.py
"""
__author__ = 'rabsatt@google.com (Kevin Rabsatt)'
class EnumTypeWrapper(object):
"""A utility for finding the names of enum values."""
DESCRIPTOR = None
# This is a type alias, which mypy typing stubs can type as
# a genericized parameter constrained to an int, allowing subclasses
# to be typed with more constraint in .pyi stubs
# Eg.
# class MyGeneratedEnum(Message):
# ValueType = NewType('ValueType', int)
# def Name(self, number: MyGeneratedEnum.ValueType) -> str
ValueType = int
def __init__(self, enum_type):
"""Inits EnumTypeWrapper with an EnumDescriptor."""
self._enum_type = enum_type
self.DESCRIPTOR = enum_type # pylint: disable=invalid-name
def Name(self, number): # pylint: disable=invalid-name
"""Returns a string containing the name of an enum value."""
try:
return self._enum_type.values_by_number[number].name
except KeyError:
pass # fall out to break exception chaining
if not isinstance(number, int):
raise TypeError(
'Enum value for {} must be an int, but got {} {!r}.'.format(
self._enum_type.name, type(number), number))
else:
# repr here to handle the odd case when you pass in a boolean.
raise ValueError('Enum {} has no name defined for value {!r}'.format(
self._enum_type.name, number))
def Value(self, name): # pylint: disable=invalid-name
"""Returns the value corresponding to the given enum name."""
try:
return self._enum_type.values_by_name[name].number
except KeyError:
pass # fall out to break exception chaining
raise ValueError('Enum {} has no value defined for name {!r}'.format(
self._enum_type.name, name))
def keys(self):
"""Return a list of the string names in the enum.
Returns:
A list of strs, in the order they were defined in the .proto file.
"""
return [value_descriptor.name
for value_descriptor in self._enum_type.values]
def values(self):
"""Return a list of the integer values in the enum.
Returns:
A list of ints, in the order they were defined in the .proto file.
"""
return [value_descriptor.number
for value_descriptor in self._enum_type.values]
def items(self):
"""Return a list of the (name, value) pairs of the enum.
Returns:
A list of (str, int) pairs, in the order they were defined
in the .proto file.
"""
return [(value_descriptor.name, value_descriptor.number)
for value_descriptor in self._enum_type.values]
def __getattr__(self, name):
"""Returns the value corresponding to the given enum name."""
try:
return super(
EnumTypeWrapper,
self).__getattribute__('_enum_type').values_by_name[name].number
except KeyError:
pass # fall out to break exception chaining
raise AttributeError('Enum {} has no value defined for name {!r}'.format(
self._enum_type.name, name))
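# Minimal usage sketch (added). The SimpleNamespace stub below stands in for
# a generated EnumDescriptor; real wrappers are created by generated code.
from types import SimpleNamespace
_red = SimpleNamespace(name='RED', number=0)
_blue = SimpleNamespace(name='BLUE', number=1)
_color = EnumTypeWrapper(SimpleNamespace(
    name='Color', values=[_red, _blue],
    values_by_name={'RED': _red, 'BLUE': _blue},
    values_by_number={0: _red, 1: _blue}))
assert _color.Name(1) == 'BLUE' and _color.Value('RED') == 0
assert _color.RED == 0  # attribute access resolves via __getattr__
assert _color.items() == [('RED', 0), ('BLUE', 1)]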

View File

@@ -0,0 +1,213 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Contains _ExtensionDict class to represent extensions.
"""
from google.protobuf.internal import type_checkers
from google.protobuf.descriptor import FieldDescriptor
def _VerifyExtensionHandle(message, extension_handle):
"""Verify that the given extension handle is valid."""
if not isinstance(extension_handle, FieldDescriptor):
raise KeyError('HasExtension() expects an extension handle, got: %s' %
extension_handle)
if not extension_handle.is_extension:
raise KeyError('"%s" is not an extension.' % extension_handle.full_name)
if not extension_handle.containing_type:
raise KeyError('"%s" is missing a containing_type.'
% extension_handle.full_name)
if extension_handle.containing_type is not message.DESCRIPTOR:
raise KeyError('Extension "%s" extends message type "%s", but this '
'message is of type "%s".' %
(extension_handle.full_name,
extension_handle.containing_type.full_name,
message.DESCRIPTOR.full_name))
# TODO(robinson): Unify error handling of "unknown extension" crap.
# TODO(robinson): Support iteritems()-style iteration over all
# extensions with the "has" bits turned on?
class _ExtensionDict(object):
"""Dict-like container for Extension fields on proto instances.
Note that in all cases we expect extension handles to be
FieldDescriptors.
"""
def __init__(self, extended_message):
"""
Args:
extended_message: Message instance for which we are the Extensions dict.
"""
self._extended_message = extended_message
def __getitem__(self, extension_handle):
"""Returns the current value of the given extension handle."""
_VerifyExtensionHandle(self._extended_message, extension_handle)
result = self._extended_message._fields.get(extension_handle)
if result is not None:
return result
if extension_handle.label == FieldDescriptor.LABEL_REPEATED:
result = extension_handle._default_constructor(self._extended_message)
elif extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE:
message_type = extension_handle.message_type
if not hasattr(message_type, '_concrete_class'):
# pylint: disable=protected-access
self._extended_message._FACTORY.GetPrototype(message_type)
assert getattr(extension_handle.message_type, '_concrete_class', None), (
'Uninitialized concrete class found for field %r (message type %r)'
% (extension_handle.full_name,
extension_handle.message_type.full_name))
result = extension_handle.message_type._concrete_class()
try:
result._SetListener(self._extended_message._listener_for_children)
except ReferenceError:
pass
else:
# Singular scalar -- just return the default without inserting into the
# dict.
return extension_handle.default_value
# Atomically check if another thread has preempted us and, if not, swap
# in the new object we just created. If someone has preempted us, we
# take that object and discard ours.
# WARNING: We are relying on setdefault() being atomic. This is true
# in CPython but we haven't investigated others. This warning appears
# in several other locations in this file.
result = self._extended_message._fields.setdefault(
extension_handle, result)
return result
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
my_fields = self._extended_message.ListFields()
other_fields = other._extended_message.ListFields()
# Get rid of non-extension fields.
my_fields = [field for field in my_fields if field[0].is_extension]
other_fields = [field for field in other_fields if field[0].is_extension]
return my_fields == other_fields
def __ne__(self, other):
return not self == other
def __len__(self):
fields = self._extended_message.ListFields()
# Get rid of non-extension fields.
extension_fields = [field for field in fields if field[0].is_extension]
return len(extension_fields)
def __hash__(self):
raise TypeError('unhashable object')
# Note that this is only meaningful for non-repeated, scalar extension
# fields. Note also that we may have to call _Modified() when we do
# successfully set a field this way, to set any necessary "has" bits in the
# ancestors of the extended message.
def __setitem__(self, extension_handle, value):
"""If extension_handle specifies a non-repeated, scalar extension
field, sets the value of that field.
"""
_VerifyExtensionHandle(self._extended_message, extension_handle)
if (extension_handle.label == FieldDescriptor.LABEL_REPEATED or
extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE):
raise TypeError(
'Cannot assign to extension "%s" because it is a repeated or '
'composite type.' % extension_handle.full_name)
# It's slightly wasteful to lookup the type checker each time,
# but we expect this to be a vanishingly uncommon case anyway.
type_checker = type_checkers.GetTypeChecker(extension_handle)
# pylint: disable=protected-access
self._extended_message._fields[extension_handle] = (
type_checker.CheckValue(value))
self._extended_message._Modified()
def __delitem__(self, extension_handle):
self._extended_message.ClearExtension(extension_handle)
def _FindExtensionByName(self, name):
"""Tries to find a known extension with the specified name.
Args:
name: Extension full name.
Returns:
Extension field descriptor.
"""
return self._extended_message._extensions_by_name.get(name, None)
def _FindExtensionByNumber(self, number):
"""Tries to find a known extension with the field number.
Args:
number: Extension field number.
Returns:
Extension field descriptor.
"""
return self._extended_message._extensions_by_number.get(number, None)
def __iter__(self):
# Return a generator over the populated extension fields
return (f[0] for f in self._extended_message.ListFields()
if f[0].is_extension)
def __contains__(self, extension_handle):
_VerifyExtensionHandle(self._extended_message, extension_handle)
if extension_handle not in self._extended_message._fields:
return False
if extension_handle.label == FieldDescriptor.LABEL_REPEATED:
return bool(self._extended_message._fields.get(extension_handle))
if extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE:
value = self._extended_message._fields.get(extension_handle)
# pylint: disable=protected-access
return value is not None and value._is_present_in_parent
return True
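# Usage sketch (added), borrowing the more_extensions test proto that appears
# later in this diff; the extension names come from that generated module.
from google.protobuf.internal import more_extensions_pb2
_msg = more_extensions_pb2.ExtendedMessage()
_msg.Extensions[more_extensions_pb2.optional_int_extension] = 7
assert more_extensions_pb2.optional_int_extension in _msg.Extensions
assert _msg.Extensions[more_extensions_pb2.optional_int_extension] == 7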

View File

@@ -0,0 +1,78 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Defines a listener interface for observing certain
state transitions on Message objects.
Also defines a null implementation of this interface.
"""
__author__ = 'robinson@google.com (Will Robinson)'
class MessageListener(object):
"""Listens for modifications made to a message. Meant to be registered via
Message._SetListener().
Attributes:
dirty: If True, then calling Modified() would be a no-op. This can be
used to avoid these calls entirely in the common case.
"""
def Modified(self):
"""Called every time the message is modified in such a way that the parent
message may need to be updated. This currently means either:
(a) The message was modified for the first time, so the parent message
should henceforth mark the message as present.
(b) The message's cached byte size became dirty -- i.e. the message was
modified for the first time after a previous call to ByteSize().
Therefore the parent should also mark its byte size as dirty.
Note that (a) implies (b), since new objects start out with a client-cached
size (zero). However, we document (a) explicitly because it is important.
Modified() will *only* be called in response to one of these two events --
not every time the sub-message is modified.
Note that if the listener's |dirty| attribute is true, then calling
Modified at the moment would be a no-op, so it can be skipped. Performance-
sensitive callers should check this attribute directly before calling since
it will be true most of the time.
"""
raise NotImplementedError
class NullMessageListener(object):
"""No-op MessageListener implementation."""
def Modified(self):
pass
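# Illustration (added): a concrete listener that counts notifications; the
# `dirty` attribute is what performance-sensitive callers check first.
class _CountingListener(MessageListener):
    def __init__(self):
        self.dirty = False
        self.count = 0
    def Modified(self):
        self.count += 1
        self.dirty = True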

View File

@@ -0,0 +1,36 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/internal/message_set_extensions.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n5google/protobuf/internal/message_set_extensions.proto\x12\x18google.protobuf.internal\"\x1e\n\x0eTestMessageSet*\x08\x08\x04\x10\xff\xff\xff\xff\x07:\x02\x08\x01\"\xa5\x01\n\x18TestMessageSetExtension1\x12\t\n\x01i\x18\x0f \x01(\x05\x32~\n\x15message_set_extension\x12(.google.protobuf.internal.TestMessageSet\x18\xab\xff\xf6. \x01(\x0b\x32\x32.google.protobuf.internal.TestMessageSetExtension1\"\xa7\x01\n\x18TestMessageSetExtension2\x12\x0b\n\x03str\x18\x19 \x01(\t2~\n\x15message_set_extension\x12(.google.protobuf.internal.TestMessageSet\x18\xca\xff\xf6. \x01(\x0b\x32\x32.google.protobuf.internal.TestMessageSetExtension2\"(\n\x18TestMessageSetExtension3\x12\x0c\n\x04text\x18# \x01(\t:\x7f\n\x16message_set_extension3\x12(.google.protobuf.internal.TestMessageSet\x18\xdf\xff\xf6. \x01(\x0b\x32\x32.google.protobuf.internal.TestMessageSetExtension3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.internal.message_set_extensions_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
TestMessageSet.RegisterExtension(message_set_extension3)
TestMessageSet.RegisterExtension(_TESTMESSAGESETEXTENSION1.extensions_by_name['message_set_extension'])
TestMessageSet.RegisterExtension(_TESTMESSAGESETEXTENSION2.extensions_by_name['message_set_extension'])
DESCRIPTOR._options = None
_TESTMESSAGESET._options = None
_TESTMESSAGESET._serialized_options = b'\010\001'
_TESTMESSAGESET._serialized_start=83
_TESTMESSAGESET._serialized_end=113
_TESTMESSAGESETEXTENSION1._serialized_start=116
_TESTMESSAGESETEXTENSION1._serialized_end=281
_TESTMESSAGESETEXTENSION2._serialized_start=284
_TESTMESSAGESETEXTENSION2._serialized_end=451
_TESTMESSAGESETEXTENSION3._serialized_start=453
_TESTMESSAGESETEXTENSION3._serialized_end=493
# @@protoc_insertion_point(module_scope)

View File

@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/internal/missing_enum_values.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n2google/protobuf/internal/missing_enum_values.proto\x12\x1fgoogle.protobuf.python.internal\"\xc1\x02\n\x0eTestEnumValues\x12X\n\x14optional_nested_enum\x18\x01 \x01(\x0e\x32:.google.protobuf.python.internal.TestEnumValues.NestedEnum\x12X\n\x14repeated_nested_enum\x18\x02 \x03(\x0e\x32:.google.protobuf.python.internal.TestEnumValues.NestedEnum\x12Z\n\x12packed_nested_enum\x18\x03 \x03(\x0e\x32:.google.protobuf.python.internal.TestEnumValues.NestedEnumB\x02\x10\x01\"\x1f\n\nNestedEnum\x12\x08\n\x04ZERO\x10\x00\x12\x07\n\x03ONE\x10\x01\"\xd3\x02\n\x15TestMissingEnumValues\x12_\n\x14optional_nested_enum\x18\x01 \x01(\x0e\x32\x41.google.protobuf.python.internal.TestMissingEnumValues.NestedEnum\x12_\n\x14repeated_nested_enum\x18\x02 \x03(\x0e\x32\x41.google.protobuf.python.internal.TestMissingEnumValues.NestedEnum\x12\x61\n\x12packed_nested_enum\x18\x03 \x03(\x0e\x32\x41.google.protobuf.python.internal.TestMissingEnumValues.NestedEnumB\x02\x10\x01\"\x15\n\nNestedEnum\x12\x07\n\x03TWO\x10\x02\"\x1b\n\nJustString\x12\r\n\x05\x64ummy\x18\x01 \x02(\t')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.internal.missing_enum_values_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_TESTENUMVALUES.fields_by_name['packed_nested_enum']._options = None
_TESTENUMVALUES.fields_by_name['packed_nested_enum']._serialized_options = b'\020\001'
_TESTMISSINGENUMVALUES.fields_by_name['packed_nested_enum']._options = None
_TESTMISSINGENUMVALUES.fields_by_name['packed_nested_enum']._serialized_options = b'\020\001'
_TESTENUMVALUES._serialized_start=88
_TESTENUMVALUES._serialized_end=409
_TESTENUMVALUES_NESTEDENUM._serialized_start=378
_TESTENUMVALUES_NESTEDENUM._serialized_end=409
_TESTMISSINGENUMVALUES._serialized_start=412
_TESTMISSINGENUMVALUES._serialized_end=751
_TESTMISSINGENUMVALUES_NESTEDENUM._serialized_start=730
_TESTMISSINGENUMVALUES_NESTEDENUM._serialized_end=751
_JUSTSTRING._serialized_start=753
_JUSTSTRING._serialized_end=780
# @@protoc_insertion_point(module_scope)

View File

@@ -0,0 +1,29 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/internal/more_extensions_dynamic.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf.internal import more_extensions_pb2 as google_dot_protobuf_dot_internal_dot_more__extensions__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n6google/protobuf/internal/more_extensions_dynamic.proto\x12\x18google.protobuf.internal\x1a.google/protobuf/internal/more_extensions.proto\"\x1f\n\x12\x44ynamicMessageType\x12\t\n\x01\x61\x18\x01 \x01(\x05:J\n\x17\x64ynamic_int32_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x64 \x01(\x05:z\n\x19\x64ynamic_message_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x65 \x01(\x0b\x32,.google.protobuf.internal.DynamicMessageType:\x83\x01\n\"repeated_dynamic_message_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x66 \x03(\x0b\x32,.google.protobuf.internal.DynamicMessageType')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.internal.more_extensions_dynamic_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
google_dot_protobuf_dot_internal_dot_more__extensions__pb2.ExtendedMessage.RegisterExtension(dynamic_int32_extension)
google_dot_protobuf_dot_internal_dot_more__extensions__pb2.ExtendedMessage.RegisterExtension(dynamic_message_extension)
google_dot_protobuf_dot_internal_dot_more__extensions__pb2.ExtendedMessage.RegisterExtension(repeated_dynamic_message_extension)
DESCRIPTOR._options = None
_DYNAMICMESSAGETYPE._serialized_start=132
_DYNAMICMESSAGETYPE._serialized_end=163
# @@protoc_insertion_point(module_scope)

View File

@@ -0,0 +1,41 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/internal/more_extensions.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n.google/protobuf/internal/more_extensions.proto\x12\x18google.protobuf.internal\"\x99\x01\n\x0fTopLevelMessage\x12\x41\n\nsubmessage\x18\x01 \x01(\x0b\x32).google.protobuf.internal.ExtendedMessageB\x02(\x01\x12\x43\n\x0enested_message\x18\x02 \x01(\x0b\x32\'.google.protobuf.internal.NestedMessageB\x02(\x01\"R\n\rNestedMessage\x12\x41\n\nsubmessage\x18\x01 \x01(\x0b\x32).google.protobuf.internal.ExtendedMessageB\x02(\x01\"K\n\x0f\x45xtendedMessage\x12\x17\n\x0eoptional_int32\x18\xe9\x07 \x01(\x05\x12\x18\n\x0frepeated_string\x18\xea\x07 \x03(\t*\x05\x08\x01\x10\xe8\x07\"-\n\x0e\x46oreignMessage\x12\x1b\n\x13\x66oreign_message_int\x18\x01 \x01(\x05:I\n\x16optional_int_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x01 \x01(\x05:w\n\x1aoptional_message_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x02 \x01(\x0b\x32(.google.protobuf.internal.ForeignMessage:I\n\x16repeated_int_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x03 \x03(\x05:w\n\x1arepeated_message_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x04 \x03(\x0b\x32(.google.protobuf.internal.ForeignMessage')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.internal.more_extensions_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
ExtendedMessage.RegisterExtension(optional_int_extension)
ExtendedMessage.RegisterExtension(optional_message_extension)
ExtendedMessage.RegisterExtension(repeated_int_extension)
ExtendedMessage.RegisterExtension(repeated_message_extension)
DESCRIPTOR._options = None
_TOPLEVELMESSAGE.fields_by_name['submessage']._options = None
_TOPLEVELMESSAGE.fields_by_name['submessage']._serialized_options = b'(\001'
_TOPLEVELMESSAGE.fields_by_name['nested_message']._options = None
_TOPLEVELMESSAGE.fields_by_name['nested_message']._serialized_options = b'(\001'
_NESTEDMESSAGE.fields_by_name['submessage']._options = None
_NESTEDMESSAGE.fields_by_name['submessage']._serialized_options = b'(\001'
_TOPLEVELMESSAGE._serialized_start=77
_TOPLEVELMESSAGE._serialized_end=230
_NESTEDMESSAGE._serialized_start=232
_NESTEDMESSAGE._serialized_end=314
_EXTENDEDMESSAGE._serialized_start=316
_EXTENDEDMESSAGE._serialized_end=391
_FOREIGNMESSAGE._serialized_start=393
_FOREIGNMESSAGE._serialized_end=438
# @@protoc_insertion_point(module_scope)

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,27 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/internal/no_package.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n)google/protobuf/internal/no_package.proto\";\n\x10NoPackageMessage\x12\'\n\x0fno_package_enum\x18\x01 \x01(\x0e\x32\x0e.NoPackageEnum*?\n\rNoPackageEnum\x12\x16\n\x12NO_PACKAGE_VALUE_0\x10\x00\x12\x16\n\x12NO_PACKAGE_VALUE_1\x10\x01')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.internal.no_package_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_NOPACKAGEENUM._serialized_start=106
_NOPACKAGEENUM._serialized_end=169
_NOPACKAGEMESSAGE._serialized_start=45
_NOPACKAGEMESSAGE._serialized_end=104
# @@protoc_insertion_point(module_scope)

File diff suppressed because it is too large

View File

@@ -0,0 +1,435 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Provides type checking routines.
This module defines type checking utilities in the forms of dictionaries:
VALUE_CHECKERS: A dictionary of field types and a value validation object.
TYPE_TO_BYTE_SIZE_FN: A dictionary with field types and a size computing
function.
TYPE_TO_SERIALIZE_METHOD: A dictionary with field types and serialization
function.
FIELD_TYPE_TO_WIRE_TYPE: A dictionary with field types and their
corresponding wire types.
TYPE_TO_DESERIALIZE_METHOD: A dictionary with field types and deserialization
function.
"""
__author__ = 'robinson@google.com (Will Robinson)'
import ctypes
import numbers
from google.protobuf.internal import decoder
from google.protobuf.internal import encoder
from google.protobuf.internal import wire_format
from google.protobuf import descriptor
_FieldDescriptor = descriptor.FieldDescriptor
def TruncateToFourByteFloat(original):
return ctypes.c_float(original).value
def ToShortestFloat(original):
"""Returns the shortest float that has same value in wire."""
# All 4 byte floats have between 6 and 9 significant digits, so we
# start with 6 as the lower bound.
# The search is iterative because using '.9g' directly does not get rid
# of the noise for most values. For example, for a float field set to 0.9,
# '.9g' prints 0.899999976.
precision = 6
rounded = float('{0:.{1}g}'.format(original, precision))
while TruncateToFourByteFloat(rounded) != original:
precision += 1
rounded = float('{0:.{1}g}'.format(original, precision))
return rounded
def SupportsOpenEnums(field_descriptor):
return field_descriptor.containing_type.syntax == 'proto3'
def GetTypeChecker(field):
"""Returns a type checker for a message field of the specified types.
Args:
field: FieldDescriptor object for this field.
Returns:
An instance of TypeChecker which can be used to verify the types
of values assigned to a field of the specified type.
"""
if (field.cpp_type == _FieldDescriptor.CPPTYPE_STRING and
field.type == _FieldDescriptor.TYPE_STRING):
return UnicodeValueChecker()
if field.cpp_type == _FieldDescriptor.CPPTYPE_ENUM:
if SupportsOpenEnums(field):
# When open enums are supported, any int32 can be assigned.
return _VALUE_CHECKERS[_FieldDescriptor.CPPTYPE_INT32]
else:
return EnumValueChecker(field.enum_type)
return _VALUE_CHECKERS[field.cpp_type]
# None of the typecheckers below make any attempt to guard against people
# subclassing builtin types and doing weird things. We're not trying to
# protect against malicious clients here, just people accidentally shooting
# themselves in the foot in obvious ways.
class TypeChecker(object):
"""Type checker used to catch type errors as early as possible
when the client is setting scalar fields in protocol messages.
"""
def __init__(self, *acceptable_types):
self._acceptable_types = acceptable_types
def CheckValue(self, proposed_value):
"""Type check the provided value and return it.
The returned value might have been normalized to another type.
"""
if not isinstance(proposed_value, self._acceptable_types):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), self._acceptable_types))
raise TypeError(message)
return proposed_value
class TypeCheckerWithDefault(TypeChecker):
def __init__(self, default_value, *acceptable_types):
TypeChecker.__init__(self, *acceptable_types)
self._default_value = default_value
def DefaultValue(self):
return self._default_value
class BoolValueChecker(object):
"""Type checker used for bool fields."""
def CheckValue(self, proposed_value):
if not hasattr(proposed_value, '__index__') or (
type(proposed_value).__module__ == 'numpy' and
type(proposed_value).__name__ == 'ndarray'):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), (bool, int)))
raise TypeError(message)
return bool(proposed_value)
def DefaultValue(self):
return False
# IntValueChecker and its subclasses perform integer type-checks
# and bounds-checks.
class IntValueChecker(object):
"""Checker used for integer fields. Performs type-check and range check."""
def CheckValue(self, proposed_value):
if not hasattr(proposed_value, '__index__') or (
type(proposed_value).__module__ == 'numpy' and
type(proposed_value).__name__ == 'ndarray'):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), (int,)))
raise TypeError(message)
if not self._MIN <= int(proposed_value) <= self._MAX:
raise ValueError('Value out of range: %d' % proposed_value)
# We force all values to int to make alternate implementations where the
# distinction is more significant (e.g. the C++ implementation) simpler.
proposed_value = int(proposed_value)
return proposed_value
def DefaultValue(self):
return 0
class EnumValueChecker(object):
"""Checker used for enum fields. Performs type-check and range check."""
def __init__(self, enum_type):
self._enum_type = enum_type
def CheckValue(self, proposed_value):
if not isinstance(proposed_value, numbers.Integral):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), (int,)))
raise TypeError(message)
if int(proposed_value) not in self._enum_type.values_by_number:
raise ValueError('Unknown enum value: %d' % proposed_value)
return proposed_value
def DefaultValue(self):
return self._enum_type.values[0].number
class UnicodeValueChecker(object):
"""Checker used for string fields.
Always returns a unicode value, even if the input is of type str.
"""
def CheckValue(self, proposed_value):
if not isinstance(proposed_value, (bytes, str)):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), (bytes, str)))
raise TypeError(message)
# If the value is of type 'bytes' make sure that it is valid UTF-8 data.
if isinstance(proposed_value, bytes):
try:
proposed_value = proposed_value.decode('utf-8')
except UnicodeDecodeError:
raise ValueError('%.1024r has type bytes, but isn\'t valid UTF-8 '
'encoding. Non-UTF-8 strings must be converted to '
'unicode objects before being added.' %
(proposed_value))
else:
try:
proposed_value.encode('utf8')
except UnicodeEncodeError:
raise ValueError('%.1024r isn\'t a valid unicode string and '
'can\'t be encoded in UTF-8.'%
(proposed_value))
return proposed_value
def DefaultValue(self):
return u""
class Int32ValueChecker(IntValueChecker):
# We're sure to use ints instead of longs here since comparison may be more
# efficient.
_MIN = -2147483648
_MAX = 2147483647
class Uint32ValueChecker(IntValueChecker):
_MIN = 0
_MAX = (1 << 32) - 1
class Int64ValueChecker(IntValueChecker):
_MIN = -(1 << 63)
_MAX = (1 << 63) - 1
class Uint64ValueChecker(IntValueChecker):
_MIN = 0
_MAX = (1 << 64) - 1
# The max 4 bytes float is about 3.4028234663852886e+38
_FLOAT_MAX = float.fromhex('0x1.fffffep+127')
_FLOAT_MIN = -_FLOAT_MAX
_INF = float('inf')
_NEG_INF = float('-inf')
class DoubleValueChecker(object):
"""Checker used for double fields.
Performs type-check and range check.
"""
def CheckValue(self, proposed_value):
"""Check and convert proposed_value to float."""
if (not hasattr(proposed_value, '__float__') and
not hasattr(proposed_value, '__index__')) or (
type(proposed_value).__module__ == 'numpy' and
type(proposed_value).__name__ == 'ndarray'):
message = ('%.1024r has type %s, but expected one of: int, float' %
(proposed_value, type(proposed_value)))
raise TypeError(message)
return float(proposed_value)
def DefaultValue(self):
return 0.0
class FloatValueChecker(DoubleValueChecker):
"""Checker used for float fields.
Performs type-check and range check.
Values exceeding a 32-bit float will be converted to inf/-inf.
"""
def CheckValue(self, proposed_value):
"""Check and convert proposed_value to float."""
converted_value = super().CheckValue(proposed_value)
# This inf rounding matches the C++ proto SafeDoubleToFloat logic.
if converted_value > _FLOAT_MAX:
return _INF
if converted_value < _FLOAT_MIN:
return _NEG_INF
return TruncateToFourByteFloat(converted_value)
# Type-checkers for all scalar CPPTYPEs.
_VALUE_CHECKERS = {
_FieldDescriptor.CPPTYPE_INT32: Int32ValueChecker(),
_FieldDescriptor.CPPTYPE_INT64: Int64ValueChecker(),
_FieldDescriptor.CPPTYPE_UINT32: Uint32ValueChecker(),
_FieldDescriptor.CPPTYPE_UINT64: Uint64ValueChecker(),
_FieldDescriptor.CPPTYPE_DOUBLE: DoubleValueChecker(),
_FieldDescriptor.CPPTYPE_FLOAT: FloatValueChecker(),
_FieldDescriptor.CPPTYPE_BOOL: BoolValueChecker(),
_FieldDescriptor.CPPTYPE_STRING: TypeCheckerWithDefault(b'', bytes),
}
# Map from field type to a function F, such that F(field_num, value)
# gives the total byte size for a value of the given type. This
# byte size includes tag information and any other additional space
# associated with serializing "value".
TYPE_TO_BYTE_SIZE_FN = {
_FieldDescriptor.TYPE_DOUBLE: wire_format.DoubleByteSize,
_FieldDescriptor.TYPE_FLOAT: wire_format.FloatByteSize,
_FieldDescriptor.TYPE_INT64: wire_format.Int64ByteSize,
_FieldDescriptor.TYPE_UINT64: wire_format.UInt64ByteSize,
_FieldDescriptor.TYPE_INT32: wire_format.Int32ByteSize,
_FieldDescriptor.TYPE_FIXED64: wire_format.Fixed64ByteSize,
_FieldDescriptor.TYPE_FIXED32: wire_format.Fixed32ByteSize,
_FieldDescriptor.TYPE_BOOL: wire_format.BoolByteSize,
_FieldDescriptor.TYPE_STRING: wire_format.StringByteSize,
_FieldDescriptor.TYPE_GROUP: wire_format.GroupByteSize,
_FieldDescriptor.TYPE_MESSAGE: wire_format.MessageByteSize,
_FieldDescriptor.TYPE_BYTES: wire_format.BytesByteSize,
_FieldDescriptor.TYPE_UINT32: wire_format.UInt32ByteSize,
_FieldDescriptor.TYPE_ENUM: wire_format.EnumByteSize,
_FieldDescriptor.TYPE_SFIXED32: wire_format.SFixed32ByteSize,
_FieldDescriptor.TYPE_SFIXED64: wire_format.SFixed64ByteSize,
_FieldDescriptor.TYPE_SINT32: wire_format.SInt32ByteSize,
_FieldDescriptor.TYPE_SINT64: wire_format.SInt64ByteSize
}
# Maps from field types to encoder constructors.
TYPE_TO_ENCODER = {
_FieldDescriptor.TYPE_DOUBLE: encoder.DoubleEncoder,
_FieldDescriptor.TYPE_FLOAT: encoder.FloatEncoder,
_FieldDescriptor.TYPE_INT64: encoder.Int64Encoder,
_FieldDescriptor.TYPE_UINT64: encoder.UInt64Encoder,
_FieldDescriptor.TYPE_INT32: encoder.Int32Encoder,
_FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Encoder,
_FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Encoder,
_FieldDescriptor.TYPE_BOOL: encoder.BoolEncoder,
_FieldDescriptor.TYPE_STRING: encoder.StringEncoder,
_FieldDescriptor.TYPE_GROUP: encoder.GroupEncoder,
_FieldDescriptor.TYPE_MESSAGE: encoder.MessageEncoder,
_FieldDescriptor.TYPE_BYTES: encoder.BytesEncoder,
_FieldDescriptor.TYPE_UINT32: encoder.UInt32Encoder,
_FieldDescriptor.TYPE_ENUM: encoder.EnumEncoder,
_FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Encoder,
_FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Encoder,
_FieldDescriptor.TYPE_SINT32: encoder.SInt32Encoder,
_FieldDescriptor.TYPE_SINT64: encoder.SInt64Encoder,
}
# Maps from field types to sizer constructors.
TYPE_TO_SIZER = {
_FieldDescriptor.TYPE_DOUBLE: encoder.DoubleSizer,
_FieldDescriptor.TYPE_FLOAT: encoder.FloatSizer,
_FieldDescriptor.TYPE_INT64: encoder.Int64Sizer,
_FieldDescriptor.TYPE_UINT64: encoder.UInt64Sizer,
_FieldDescriptor.TYPE_INT32: encoder.Int32Sizer,
_FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Sizer,
_FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Sizer,
_FieldDescriptor.TYPE_BOOL: encoder.BoolSizer,
_FieldDescriptor.TYPE_STRING: encoder.StringSizer,
_FieldDescriptor.TYPE_GROUP: encoder.GroupSizer,
_FieldDescriptor.TYPE_MESSAGE: encoder.MessageSizer,
_FieldDescriptor.TYPE_BYTES: encoder.BytesSizer,
_FieldDescriptor.TYPE_UINT32: encoder.UInt32Sizer,
_FieldDescriptor.TYPE_ENUM: encoder.EnumSizer,
_FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Sizer,
_FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Sizer,
_FieldDescriptor.TYPE_SINT32: encoder.SInt32Sizer,
_FieldDescriptor.TYPE_SINT64: encoder.SInt64Sizer,
}
# Maps from field type to a decoder constructor.
TYPE_TO_DECODER = {
_FieldDescriptor.TYPE_DOUBLE: decoder.DoubleDecoder,
_FieldDescriptor.TYPE_FLOAT: decoder.FloatDecoder,
_FieldDescriptor.TYPE_INT64: decoder.Int64Decoder,
_FieldDescriptor.TYPE_UINT64: decoder.UInt64Decoder,
_FieldDescriptor.TYPE_INT32: decoder.Int32Decoder,
_FieldDescriptor.TYPE_FIXED64: decoder.Fixed64Decoder,
_FieldDescriptor.TYPE_FIXED32: decoder.Fixed32Decoder,
_FieldDescriptor.TYPE_BOOL: decoder.BoolDecoder,
_FieldDescriptor.TYPE_STRING: decoder.StringDecoder,
_FieldDescriptor.TYPE_GROUP: decoder.GroupDecoder,
_FieldDescriptor.TYPE_MESSAGE: decoder.MessageDecoder,
_FieldDescriptor.TYPE_BYTES: decoder.BytesDecoder,
_FieldDescriptor.TYPE_UINT32: decoder.UInt32Decoder,
_FieldDescriptor.TYPE_ENUM: decoder.EnumDecoder,
_FieldDescriptor.TYPE_SFIXED32: decoder.SFixed32Decoder,
_FieldDescriptor.TYPE_SFIXED64: decoder.SFixed64Decoder,
_FieldDescriptor.TYPE_SINT32: decoder.SInt32Decoder,
_FieldDescriptor.TYPE_SINT64: decoder.SInt64Decoder,
}
# Maps from field type to expected wiretype.
FIELD_TYPE_TO_WIRE_TYPE = {
_FieldDescriptor.TYPE_DOUBLE: wire_format.WIRETYPE_FIXED64,
_FieldDescriptor.TYPE_FLOAT: wire_format.WIRETYPE_FIXED32,
_FieldDescriptor.TYPE_INT64: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_UINT64: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_INT32: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_FIXED64: wire_format.WIRETYPE_FIXED64,
_FieldDescriptor.TYPE_FIXED32: wire_format.WIRETYPE_FIXED32,
_FieldDescriptor.TYPE_BOOL: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_STRING:
wire_format.WIRETYPE_LENGTH_DELIMITED,
_FieldDescriptor.TYPE_GROUP: wire_format.WIRETYPE_START_GROUP,
_FieldDescriptor.TYPE_MESSAGE:
wire_format.WIRETYPE_LENGTH_DELIMITED,
_FieldDescriptor.TYPE_BYTES:
wire_format.WIRETYPE_LENGTH_DELIMITED,
_FieldDescriptor.TYPE_UINT32: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_ENUM: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_SFIXED32: wire_format.WIRETYPE_FIXED32,
_FieldDescriptor.TYPE_SFIXED64: wire_format.WIRETYPE_FIXED64,
_FieldDescriptor.TYPE_SINT32: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_SINT64: wire_format.WIRETYPE_VARINT,
}
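
A standalone sketch of the float-shortening logic above (assuming the google.protobuf package is importable; the values shown follow from the comments in ToShortestFloat):

from google.protobuf.internal.type_checkers import (
    ToShortestFloat, TruncateToFourByteFloat)

# 0.9 is not exactly representable in 4 bytes; the stored wire value is the
# nearest float32, which '%.9g' would print as 0.899999976.
stored = TruncateToFourByteFloat(0.9)
shortest = ToShortestFloat(stored)
assert TruncateToFourByteFloat(shortest) == stored  # same wire value
print(shortest)  # 0.9 -- the shortest decimal that round-trips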


@@ -0,0 +1,878 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Contains well known classes.
This file defines well-known classes which need extra maintenance, including:
- Any
- Duration
- FieldMask
- Struct
- Timestamp
"""
__author__ = 'jieluo@google.com (Jie Luo)'
import calendar
import collections.abc
import datetime
from google.protobuf.descriptor import FieldDescriptor
_TIMESTAMPFOMAT = '%Y-%m-%dT%H:%M:%S'
_NANOS_PER_SECOND = 1000000000
_NANOS_PER_MILLISECOND = 1000000
_NANOS_PER_MICROSECOND = 1000
_MILLIS_PER_SECOND = 1000
_MICROS_PER_SECOND = 1000000
_SECONDS_PER_DAY = 24 * 3600
_DURATION_SECONDS_MAX = 315576000000
class Any(object):
"""Class for Any Message type."""
__slots__ = ()
def Pack(self, msg, type_url_prefix='type.googleapis.com/',
deterministic=None):
"""Packs the specified message into current Any message."""
if len(type_url_prefix) < 1 or type_url_prefix[-1] != '/':
self.type_url = '%s/%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
else:
self.type_url = '%s%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
self.value = msg.SerializeToString(deterministic=deterministic)
def Unpack(self, msg):
"""Unpacks the current Any message into specified message."""
descriptor = msg.DESCRIPTOR
if not self.Is(descriptor):
return False
msg.ParseFromString(self.value)
return True
def TypeName(self):
"""Returns the protobuf type name of the inner message."""
# Only last part is to be used: b/25630112
return self.type_url.split('/')[-1]
def Is(self, descriptor):
"""Checks if this Any represents the given protobuf type."""
return '/' in self.type_url and self.TypeName() == descriptor.full_name
_EPOCH_DATETIME_NAIVE = datetime.datetime.utcfromtimestamp(0)
_EPOCH_DATETIME_AWARE = datetime.datetime.fromtimestamp(
0, tz=datetime.timezone.utc)
class Timestamp(object):
"""Class for Timestamp message type."""
__slots__ = ()
def ToJsonString(self):
"""Converts Timestamp to RFC 3339 date string format.
Returns:
A string converted from timestamp. The string is always Z-normalized
and uses 3, 6 or 9 fractional digits as required to represent the
exact time. Example of the return format: '1972-01-01T10:00:20.021Z'
"""
nanos = self.nanos % _NANOS_PER_SECOND
total_sec = self.seconds + (self.nanos - nanos) // _NANOS_PER_SECOND
seconds = total_sec % _SECONDS_PER_DAY
days = (total_sec - seconds) // _SECONDS_PER_DAY
dt = datetime.datetime(1970, 1, 1) + datetime.timedelta(days, seconds)
result = dt.isoformat()
if (nanos % 1e9) == 0:
# If there are 0 fractional digits, the fractional
# point '.' should be omitted when serializing.
return result + 'Z'
if (nanos % 1e6) == 0:
# Serialize 3 fractional digits.
return result + '.%03dZ' % (nanos / 1e6)
if (nanos % 1e3) == 0:
# Serialize 6 fractional digits.
return result + '.%06dZ' % (nanos / 1e3)
# Serialize 9 fractional digits.
return result + '.%09dZ' % nanos
def FromJsonString(self, value):
"""Parse a RFC 3339 date string format to Timestamp.
Args:
value: A date string. Any fractional digits (or none) and any offset are
accepted as long as they fit into nano-seconds precision.
Example of accepted format: '1972-01-01T10:00:20.021-05:00'
Raises:
ValueError: On parsing problems.
"""
if not isinstance(value, str):
raise ValueError('Timestamp JSON value not a string: {!r}'.format(value))
timezone_offset = value.find('Z')
if timezone_offset == -1:
timezone_offset = value.find('+')
if timezone_offset == -1:
timezone_offset = value.rfind('-')
if timezone_offset == -1:
raise ValueError(
'Failed to parse timestamp: missing valid timezone offset.')
time_value = value[0:timezone_offset]
# Parse datetime and nanos.
point_position = time_value.find('.')
if point_position == -1:
second_value = time_value
nano_value = ''
else:
second_value = time_value[:point_position]
nano_value = time_value[point_position + 1:]
if 't' in second_value:
raise ValueError(
'time data \'{0}\' does not match format \'%Y-%m-%dT%H:%M:%S\', '
'lowercase \'t\' is not accepted'.format(second_value))
date_object = datetime.datetime.strptime(second_value, _TIMESTAMPFOMAT)
td = date_object - datetime.datetime(1970, 1, 1)
seconds = td.seconds + td.days * _SECONDS_PER_DAY
if len(nano_value) > 9:
raise ValueError(
'Failed to parse Timestamp: nanos {0} more than '
'9 fractional digits.'.format(nano_value))
if nano_value:
nanos = round(float('0.' + nano_value) * 1e9)
else:
nanos = 0
# Parse timezone offsets.
if value[timezone_offset] == 'Z':
if len(value) != timezone_offset + 1:
raise ValueError('Failed to parse timestamp: invalid trailing'
' data {0}.'.format(value))
else:
timezone = value[timezone_offset:]
pos = timezone.find(':')
if pos == -1:
raise ValueError(
'Invalid timezone offset value: {0}.'.format(timezone))
if timezone[0] == '+':
seconds -= (int(timezone[1:pos])*60+int(timezone[pos+1:]))*60
else:
seconds += (int(timezone[1:pos])*60+int(timezone[pos+1:]))*60
# Set seconds and nanos
self.seconds = int(seconds)
self.nanos = int(nanos)
def GetCurrentTime(self):
"""Get the current UTC into Timestamp."""
self.FromDatetime(datetime.datetime.utcnow())
def ToNanoseconds(self):
"""Converts Timestamp to nanoseconds since epoch."""
return self.seconds * _NANOS_PER_SECOND + self.nanos
def ToMicroseconds(self):
"""Converts Timestamp to microseconds since epoch."""
return (self.seconds * _MICROS_PER_SECOND +
self.nanos // _NANOS_PER_MICROSECOND)
def ToMilliseconds(self):
"""Converts Timestamp to milliseconds since epoch."""
return (self.seconds * _MILLIS_PER_SECOND +
self.nanos // _NANOS_PER_MILLISECOND)
def ToSeconds(self):
"""Converts Timestamp to seconds since epoch."""
return self.seconds
def FromNanoseconds(self, nanos):
"""Converts nanoseconds since epoch to Timestamp."""
self.seconds = nanos // _NANOS_PER_SECOND
self.nanos = nanos % _NANOS_PER_SECOND
def FromMicroseconds(self, micros):
"""Converts microseconds since epoch to Timestamp."""
self.seconds = micros // _MICROS_PER_SECOND
self.nanos = (micros % _MICROS_PER_SECOND) * _NANOS_PER_MICROSECOND
def FromMilliseconds(self, millis):
"""Converts milliseconds since epoch to Timestamp."""
self.seconds = millis // _MILLIS_PER_SECOND
self.nanos = (millis % _MILLIS_PER_SECOND) * _NANOS_PER_MILLISECOND
def FromSeconds(self, seconds):
"""Converts seconds since epoch to Timestamp."""
self.seconds = seconds
self.nanos = 0
def ToDatetime(self, tzinfo=None):
"""Converts Timestamp to a datetime.
Args:
tzinfo: A datetime.tzinfo subclass; defaults to None.
Returns:
If tzinfo is None, returns a timezone-naive UTC datetime (with no timezone
information, i.e. not aware that it's UTC).
Otherwise, returns a timezone-aware datetime in the input timezone.
"""
delta = datetime.timedelta(
seconds=self.seconds,
microseconds=_RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND))
if tzinfo is None:
return _EPOCH_DATETIME_NAIVE + delta
else:
return _EPOCH_DATETIME_AWARE.astimezone(tzinfo) + delta
def FromDatetime(self, dt):
"""Converts datetime to Timestamp.
Args:
dt: A datetime. If it's timezone-naive, it's assumed to be in UTC.
"""
# Using this guide: http://wiki.python.org/moin/WorkingWithTime
# And this conversion guide: http://docs.python.org/library/time.html
# Turn the date parameter into a tuple (struct_time) that can then be
# manipulated into a long value of seconds. During the conversion from
# struct_time to long, the source date is in UTC, so the correct
# transformation is calendar.timegm().
self.seconds = calendar.timegm(dt.utctimetuple())
self.nanos = dt.microsecond * _NANOS_PER_MICROSECOND
class Duration(object):
"""Class for Duration message type."""
__slots__ = ()
def ToJsonString(self):
"""Converts Duration to string format.
Returns:
A string converted from self. The string format will contain
3, 6, or 9 fractional digits depending on the precision required to
represent the exact Duration value. For example: "1s", "1.010s",
"1.000000100s", "-3.100s"
"""
_CheckDurationValid(self.seconds, self.nanos)
if self.seconds < 0 or self.nanos < 0:
result = '-'
seconds = - self.seconds + int((0 - self.nanos) // 1e9)
nanos = (0 - self.nanos) % 1e9
else:
result = ''
seconds = self.seconds + int(self.nanos // 1e9)
nanos = self.nanos % 1e9
result += '%d' % seconds
if (nanos % 1e9) == 0:
# If there are 0 fractional digits, the fractional
# point '.' should be omitted when serializing.
return result + 's'
if (nanos % 1e6) == 0:
# Serialize 3 fractional digits.
return result + '.%03ds' % (nanos / 1e6)
if (nanos % 1e3) == 0:
# Serialize 6 fractional digits.
return result + '.%06ds' % (nanos / 1e3)
# Serialize 9 fractional digits.
return result + '.%09ds' % nanos
def FromJsonString(self, value):
"""Converts a string to Duration.
Args:
value: A string to be converted. The string must end with 's'. Any
fractional digits (or none) are accepted as long as they fit into
precision. For example: "1s", "1.01s", "1.0000001s", "-3.100s".
Raises:
ValueError: On parsing problems.
"""
if not isinstance(value, str):
raise ValueError('Duration JSON value not a string: {!r}'.format(value))
if len(value) < 1 or value[-1] != 's':
raise ValueError(
'Duration must end with letter "s": {0}.'.format(value))
try:
pos = value.find('.')
if pos == -1:
seconds = int(value[:-1])
nanos = 0
else:
seconds = int(value[:pos])
if value[0] == '-':
nanos = int(round(float('-0{0}'.format(value[pos: -1])) *1e9))
else:
nanos = int(round(float('0{0}'.format(value[pos: -1])) *1e9))
_CheckDurationValid(seconds, nanos)
self.seconds = seconds
self.nanos = nanos
except ValueError as e:
raise ValueError(
'Couldn\'t parse duration: {0} : {1}.'.format(value, e))
def ToNanoseconds(self):
"""Converts a Duration to nanoseconds."""
return self.seconds * _NANOS_PER_SECOND + self.nanos
def ToMicroseconds(self):
"""Converts a Duration to microseconds."""
micros = _RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND)
return self.seconds * _MICROS_PER_SECOND + micros
def ToMilliseconds(self):
"""Converts a Duration to milliseconds."""
millis = _RoundTowardZero(self.nanos, _NANOS_PER_MILLISECOND)
return self.seconds * _MILLIS_PER_SECOND + millis
def ToSeconds(self):
"""Converts a Duration to seconds."""
return self.seconds
def FromNanoseconds(self, nanos):
"""Converts nanoseconds to Duration."""
self._NormalizeDuration(nanos // _NANOS_PER_SECOND,
nanos % _NANOS_PER_SECOND)
def FromMicroseconds(self, micros):
"""Converts microseconds to Duration."""
self._NormalizeDuration(
micros // _MICROS_PER_SECOND,
(micros % _MICROS_PER_SECOND) * _NANOS_PER_MICROSECOND)
def FromMilliseconds(self, millis):
"""Converts milliseconds to Duration."""
self._NormalizeDuration(
millis // _MILLIS_PER_SECOND,
(millis % _MILLIS_PER_SECOND) * _NANOS_PER_MILLISECOND)
def FromSeconds(self, seconds):
"""Converts seconds to Duration."""
self.seconds = seconds
self.nanos = 0
def ToTimedelta(self):
"""Converts Duration to timedelta."""
return datetime.timedelta(
seconds=self.seconds, microseconds=_RoundTowardZero(
self.nanos, _NANOS_PER_MICROSECOND))
def FromTimedelta(self, td):
"""Converts timedelta to Duration."""
self._NormalizeDuration(td.seconds + td.days * _SECONDS_PER_DAY,
td.microseconds * _NANOS_PER_MICROSECOND)
def _NormalizeDuration(self, seconds, nanos):
"""Set Duration by seconds and nanos."""
# Force nanos to be negative if the duration is negative.
if seconds < 0 and nanos > 0:
seconds += 1
nanos -= _NANOS_PER_SECOND
self.seconds = seconds
self.nanos = nanos
def _CheckDurationValid(seconds, nanos):
if seconds < -_DURATION_SECONDS_MAX or seconds > _DURATION_SECONDS_MAX:
raise ValueError(
'Duration is not valid: Seconds {0} must be in range '
'[-315576000000, 315576000000].'.format(seconds))
if nanos <= -_NANOS_PER_SECOND or nanos >= _NANOS_PER_SECOND:
raise ValueError(
'Duration is not valid: Nanos {0} must be in range '
'[-999999999, 999999999].'.format(nanos))
if (nanos < 0 and seconds > 0) or (nanos > 0 and seconds < 0):
raise ValueError(
'Duration is not valid: Sign mismatch.')
def _RoundTowardZero(value, divider):
"""Truncates the remainder part after division."""
# For some languages, the sign of the remainder is implementation
# dependent if any of the operands is negative. Here we enforce
# "rounded toward zero" semantics. For example, for (-5) / 2 an
# implementation may give -3 as the result with the remainder being
# 1. This function ensures we always return -2 (closer to zero).
result = value // divider
remainder = value % divider
if result < 0 and remainder > 0:
return result + 1
else:
return result
class FieldMask(object):
"""Class for FieldMask message type."""
__slots__ = ()
def ToJsonString(self):
"""Converts FieldMask to string according to proto3 JSON spec."""
camelcase_paths = []
for path in self.paths:
camelcase_paths.append(_SnakeCaseToCamelCase(path))
return ','.join(camelcase_paths)
def FromJsonString(self, value):
"""Converts string to FieldMask according to proto3 JSON spec."""
if not isinstance(value, str):
raise ValueError('FieldMask JSON value not a string: {!r}'.format(value))
self.Clear()
if value:
for path in value.split(','):
self.paths.append(_CamelCaseToSnakeCase(path))
def IsValidForDescriptor(self, message_descriptor):
"""Checks whether the FieldMask is valid for Message Descriptor."""
for path in self.paths:
if not _IsValidPath(message_descriptor, path):
return False
return True
def AllFieldsFromDescriptor(self, message_descriptor):
"""Gets all direct fields of Message Descriptor to FieldMask."""
self.Clear()
for field in message_descriptor.fields:
self.paths.append(field.name)
def CanonicalFormFromMask(self, mask):
"""Converts a FieldMask to the canonical form.
Removes paths that are covered by another path. For example,
"foo.bar" is covered by "foo" and will be removed if "foo"
is also in the FieldMask. Then sorts all paths in alphabetical order.
Args:
mask: The original FieldMask to be converted.
"""
tree = _FieldMaskTree(mask)
tree.ToFieldMask(self)
def Union(self, mask1, mask2):
"""Merges mask1 and mask2 into this FieldMask."""
_CheckFieldMaskMessage(mask1)
_CheckFieldMaskMessage(mask2)
tree = _FieldMaskTree(mask1)
tree.MergeFromFieldMask(mask2)
tree.ToFieldMask(self)
def Intersect(self, mask1, mask2):
"""Intersects mask1 and mask2 into this FieldMask."""
_CheckFieldMaskMessage(mask1)
_CheckFieldMaskMessage(mask2)
tree = _FieldMaskTree(mask1)
intersection = _FieldMaskTree()
for path in mask2.paths:
tree.IntersectPath(path, intersection)
intersection.ToFieldMask(self)
def MergeMessage(
self, source, destination,
replace_message_field=False, replace_repeated_field=False):
"""Merges fields specified in FieldMask from source to destination.
Args:
source: Source message.
destination: The destination message to be merged into.
replace_message_field: Replace message field if True. Merge message
field if False.
replace_repeated_field: Replace repeated field if True. Append
elements of repeated field if False.
"""
tree = _FieldMaskTree(self)
tree.MergeMessage(
source, destination, replace_message_field, replace_repeated_field)
def _IsValidPath(message_descriptor, path):
"""Checks whether the path is valid for Message Descriptor."""
parts = path.split('.')
last = parts.pop()
for name in parts:
field = message_descriptor.fields_by_name.get(name)
if (field is None or
field.label == FieldDescriptor.LABEL_REPEATED or
field.type != FieldDescriptor.TYPE_MESSAGE):
return False
message_descriptor = field.message_type
return last in message_descriptor.fields_by_name
def _CheckFieldMaskMessage(message):
"""Raises ValueError if message is not a FieldMask."""
message_descriptor = message.DESCRIPTOR
if (message_descriptor.name != 'FieldMask' or
message_descriptor.file.name != 'google/protobuf/field_mask.proto'):
raise ValueError('Message {0} is not a FieldMask.'.format(
message_descriptor.full_name))
def _SnakeCaseToCamelCase(path_name):
"""Converts a path name from snake_case to camelCase."""
result = []
after_underscore = False
for c in path_name:
if c.isupper():
raise ValueError(
'Failed to print FieldMask to Json string: Path name '
'{0} must not contain uppercase letters.'.format(path_name))
if after_underscore:
if c.islower():
result.append(c.upper())
after_underscore = False
else:
raise ValueError(
'Failed to print FieldMask to Json string: The '
'character after a "_" must be a lowercase letter '
'in path name {0}.'.format(path_name))
elif c == '_':
after_underscore = True
else:
result += c
if after_underscore:
raise ValueError('Failed to print FieldMask to Json string: Trailing "_" '
'in path name {0}.'.format(path_name))
return ''.join(result)
def _CamelCaseToSnakeCase(path_name):
"""Converts a field name from camelCase to snake_case."""
result = []
for c in path_name:
if c == '_':
raise ValueError('Failed to parse FieldMask: Path name '
'{0} must not contain "_"s.'.format(path_name))
if c.isupper():
result += '_'
result += c.lower()
else:
result += c
return ''.join(result)
class _FieldMaskTree(object):
"""Represents a FieldMask in a tree structure.
For example, given a FieldMask "foo.bar,foo.baz,bar.baz",
the FieldMaskTree will be:
[_root] -+- foo -+- bar
| |
| +- baz
|
+- bar --- baz
In the tree, each leaf node represents a field path.
"""
__slots__ = ('_root',)
def __init__(self, field_mask=None):
"""Initializes the tree by FieldMask."""
self._root = {}
if field_mask:
self.MergeFromFieldMask(field_mask)
def MergeFromFieldMask(self, field_mask):
"""Merges a FieldMask to the tree."""
for path in field_mask.paths:
self.AddPath(path)
def AddPath(self, path):
"""Adds a field path into the tree.
If the field path to add is a sub-path of an existing field path
in the tree (i.e., a leaf node), it means the tree already matches
the given path so nothing will be added to the tree. If the path
matches an existing non-leaf node in the tree, that non-leaf node
will be turned into a leaf node with all its children removed because
the path matches all the node's children. Otherwise, a new path will
be added.
Args:
path: The field path to add.
"""
node = self._root
for name in path.split('.'):
if name not in node:
node[name] = {}
elif not node[name]:
# Pre-existing empty node implies we already have this entire tree.
return
node = node[name]
# Remove any sub-trees we might have had.
node.clear()
def ToFieldMask(self, field_mask):
"""Converts the tree to a FieldMask."""
field_mask.Clear()
_AddFieldPaths(self._root, '', field_mask)
def IntersectPath(self, path, intersection):
"""Calculates the intersection part of a field path with this tree.
Args:
path: The field path to intersect with this tree.
intersection: The out tree to record the intersection part.
"""
node = self._root
for name in path.split('.'):
if name not in node:
return
elif not node[name]:
intersection.AddPath(path)
return
node = node[name]
intersection.AddLeafNodes(path, node)
def AddLeafNodes(self, prefix, node):
"""Adds leaf nodes begin with prefix to this tree."""
if not node:
self.AddPath(prefix)
for name in node:
child_path = prefix + '.' + name
self.AddLeafNodes(child_path, node[name])
def MergeMessage(
self, source, destination,
replace_message, replace_repeated):
"""Merge all fields specified by this tree from source to destination."""
_MergeMessage(
self._root, source, destination, replace_message, replace_repeated)
def _StrConvert(value):
"""Converts value to str if it is not."""
# This file is imported by the C extension, and some methods like ClearField
# require a string for the field name. py2/py3 have different text
# types and may use unicode.
if not isinstance(value, str):
return value.encode('utf-8')
return value
def _MergeMessage(
node, source, destination, replace_message, replace_repeated):
"""Merge all fields specified by a sub-tree from source to destination."""
source_descriptor = source.DESCRIPTOR
for name in node:
child = node[name]
field = source_descriptor.fields_by_name[name]
if field is None:
raise ValueError('Error: Can\'t find field {0} in message {1}.'.format(
name, source_descriptor.full_name))
if child:
# Sub-paths are only allowed for singular message fields.
if (field.label == FieldDescriptor.LABEL_REPEATED or
field.cpp_type != FieldDescriptor.CPPTYPE_MESSAGE):
raise ValueError('Error: Field {0} in message {1} is not a singular '
'message field and cannot have sub-fields.'.format(
name, source_descriptor.full_name))
if source.HasField(name):
_MergeMessage(
child, getattr(source, name), getattr(destination, name),
replace_message, replace_repeated)
continue
if field.label == FieldDescriptor.LABEL_REPEATED:
if replace_repeated:
destination.ClearField(_StrConvert(name))
repeated_source = getattr(source, name)
repeated_destination = getattr(destination, name)
repeated_destination.MergeFrom(repeated_source)
else:
if field.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE:
if replace_message:
destination.ClearField(_StrConvert(name))
if source.HasField(name):
getattr(destination, name).MergeFrom(getattr(source, name))
else:
setattr(destination, name, getattr(source, name))
def _AddFieldPaths(node, prefix, field_mask):
"""Adds the field paths descended from node to field_mask."""
if not node and prefix:
field_mask.paths.append(prefix)
return
for name in sorted(node):
if prefix:
child_path = prefix + '.' + name
else:
child_path = name
_AddFieldPaths(node[name], child_path, field_mask)
def _SetStructValue(struct_value, value):
if value is None:
struct_value.null_value = 0
elif isinstance(value, bool):
# Note: this check must come before the number check because in Python
# True and False are also considered numbers.
struct_value.bool_value = value
elif isinstance(value, str):
struct_value.string_value = value
elif isinstance(value, (int, float)):
struct_value.number_value = value
elif isinstance(value, (dict, Struct)):
struct_value.struct_value.Clear()
struct_value.struct_value.update(value)
elif isinstance(value, (list, ListValue)):
struct_value.list_value.Clear()
struct_value.list_value.extend(value)
else:
raise ValueError('Unexpected type')
def _GetStructValue(struct_value):
which = struct_value.WhichOneof('kind')
if which == 'struct_value':
return struct_value.struct_value
elif which == 'null_value':
return None
elif which == 'number_value':
return struct_value.number_value
elif which == 'string_value':
return struct_value.string_value
elif which == 'bool_value':
return struct_value.bool_value
elif which == 'list_value':
return struct_value.list_value
elif which is None:
raise ValueError('Value not set')
class Struct(object):
"""Class for Struct message type."""
__slots__ = ()
def __getitem__(self, key):
return _GetStructValue(self.fields[key])
def __contains__(self, item):
return item in self.fields
def __setitem__(self, key, value):
_SetStructValue(self.fields[key], value)
def __delitem__(self, key):
del self.fields[key]
def __len__(self):
return len(self.fields)
def __iter__(self):
return iter(self.fields)
def keys(self): # pylint: disable=invalid-name
return self.fields.keys()
def values(self): # pylint: disable=invalid-name
return [self[key] for key in self]
def items(self): # pylint: disable=invalid-name
return [(key, self[key]) for key in self]
def get_or_create_list(self, key):
"""Returns a list for this key, creating if it didn't exist already."""
if not self.fields[key].HasField('list_value'):
# Clear will mark list_value modified which will indeed create a list.
self.fields[key].list_value.Clear()
return self.fields[key].list_value
def get_or_create_struct(self, key):
"""Returns a struct for this key, creating if it didn't exist already."""
if not self.fields[key].HasField('struct_value'):
# Clear will mark struct_value modified which will indeed create a struct.
self.fields[key].struct_value.Clear()
return self.fields[key].struct_value
def update(self, dictionary): # pylint: disable=invalid-name
for key, value in dictionary.items():
_SetStructValue(self.fields[key], value)
collections.abc.MutableMapping.register(Struct)
class ListValue(object):
"""Class for ListValue message type."""
__slots__ = ()
def __len__(self):
return len(self.values)
def append(self, value):
_SetStructValue(self.values.add(), value)
def extend(self, elem_seq):
for value in elem_seq:
self.append(value)
def __getitem__(self, index):
"""Retrieves item by the specified index."""
return _GetStructValue(self.values.__getitem__(index))
def __setitem__(self, index, value):
_SetStructValue(self.values.__getitem__(index), value)
def __delitem__(self, key):
del self.values[key]
def items(self):
for i in range(len(self)):
yield self[i]
def add_struct(self):
"""Appends and returns a struct value as the next value in the list."""
struct_value = self.values.add().struct_value
# Clear will mark struct_value modified which will indeed create a struct.
struct_value.Clear()
return struct_value
def add_list(self):
"""Appends and returns a list value as the next value in the list."""
list_value = self.values.add().list_value
# Clear will mark list_value modified which will indeed create a list.
list_value.Clear()
return list_value
collections.abc.MutableSequence.register(ListValue)
WKTBASES = {
'google.protobuf.Any': Any,
'google.protobuf.Duration': Duration,
'google.protobuf.FieldMask': FieldMask,
'google.protobuf.ListValue': ListValue,
'google.protobuf.Struct': Struct,
'google.protobuf.Timestamp': Timestamp,
}
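
A brief usage sketch for the Timestamp and Duration mixins above (assuming the generated timestamp_pb2 and duration_pb2 modules, which combine these classes with the wire-level messages):

from google.protobuf.duration_pb2 import Duration
from google.protobuf.timestamp_pb2 import Timestamp

d = Duration()
d.FromMilliseconds(1010)
print(d.ToJsonString())  # '1.010s' -- 3 fractional digits suffice

t = Timestamp()
t.FromJsonString('1972-01-01T10:00:20.021-05:00')
print(t.ToJsonString())  # Z-normalized: '1972-01-01T15:00:20.021Z'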


@@ -0,0 +1,268 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Constants and static functions to support protocol buffer wire format."""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
from google.protobuf import descriptor
from google.protobuf import message
TAG_TYPE_BITS = 3 # Number of bits used to hold type info in a proto tag.
TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1 # 0x7
# These numbers identify the wire type of a protocol buffer value.
# We use the least-significant TAG_TYPE_BITS bits of the varint-encoded
# tag-and-type to store one of these WIRETYPE_* constants.
# These values must match WireType enum in google/protobuf/wire_format.h.
WIRETYPE_VARINT = 0
WIRETYPE_FIXED64 = 1
WIRETYPE_LENGTH_DELIMITED = 2
WIRETYPE_START_GROUP = 3
WIRETYPE_END_GROUP = 4
WIRETYPE_FIXED32 = 5
_WIRETYPE_MAX = 5
# Bounds for various integer types.
INT32_MAX = int((1 << 31) - 1)
INT32_MIN = int(-(1 << 31))
UINT32_MAX = (1 << 32) - 1
INT64_MAX = (1 << 63) - 1
INT64_MIN = -(1 << 63)
UINT64_MAX = (1 << 64) - 1
# "struct" format strings that will encode/decode the specified formats.
FORMAT_UINT32_LITTLE_ENDIAN = '<I'
FORMAT_UINT64_LITTLE_ENDIAN = '<Q'
FORMAT_FLOAT_LITTLE_ENDIAN = '<f'
FORMAT_DOUBLE_LITTLE_ENDIAN = '<d'
# We'll have to provide alternate implementations of AppendLittleEndian*() on
# any architectures where these checks fail.
if struct.calcsize(FORMAT_UINT32_LITTLE_ENDIAN) != 4:
raise AssertionError('Format "I" is not a 32-bit number.')
if struct.calcsize(FORMAT_UINT64_LITTLE_ENDIAN) != 8:
raise AssertionError('Format "Q" is not a 64-bit number.')
def PackTag(field_number, wire_type):
"""Returns an unsigned 32-bit integer that encodes the field number and
wire type information in standard protocol message wire format.
Args:
field_number: Expected to be an integer in the range [1, 1 << 29)
wire_type: One of the WIRETYPE_* constants.
"""
if not 0 <= wire_type <= _WIRETYPE_MAX:
raise message.EncodeError('Unknown wire type: %d' % wire_type)
return (field_number << TAG_TYPE_BITS) | wire_type
def UnpackTag(tag):
"""The inverse of PackTag(). Given an unsigned 32-bit number,
returns a (field_number, wire_type) tuple.
"""
return (tag >> TAG_TYPE_BITS), (tag & TAG_TYPE_MASK)
def ZigZagEncode(value):
"""ZigZag Transform: Encodes signed integers so that they can be
effectively used with varint encoding. See wire_format.h for
more details.
"""
if value >= 0:
return value << 1
return (value << 1) ^ (~0)
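# For example: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, so values of small magnitude
# (of either sign) encode to short varints.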
def ZigZagDecode(value):
"""Inverse of ZigZagEncode()."""
if not value & 0x1:
return value >> 1
return (value >> 1) ^ (~0)
# The *ByteSize() functions below return the number of bytes required to
# serialize "field number + type" information and then serialize the value.
def Int32ByteSize(field_number, int32):
return Int64ByteSize(field_number, int32)
def Int32ByteSizeNoTag(int32):
return _VarUInt64ByteSizeNoTag(0xffffffffffffffff & int32)
def Int64ByteSize(field_number, int64):
# Have to convert to uint before calling UInt64ByteSize().
return UInt64ByteSize(field_number, 0xffffffffffffffff & int64)
def UInt32ByteSize(field_number, uint32):
return UInt64ByteSize(field_number, uint32)
def UInt64ByteSize(field_number, uint64):
return TagByteSize(field_number) + _VarUInt64ByteSizeNoTag(uint64)
def SInt32ByteSize(field_number, int32):
return UInt32ByteSize(field_number, ZigZagEncode(int32))
def SInt64ByteSize(field_number, int64):
return UInt64ByteSize(field_number, ZigZagEncode(int64))
def Fixed32ByteSize(field_number, fixed32):
return TagByteSize(field_number) + 4
def Fixed64ByteSize(field_number, fixed64):
return TagByteSize(field_number) + 8
def SFixed32ByteSize(field_number, sfixed32):
return TagByteSize(field_number) + 4
def SFixed64ByteSize(field_number, sfixed64):
return TagByteSize(field_number) + 8
def FloatByteSize(field_number, flt):
return TagByteSize(field_number) + 4
def DoubleByteSize(field_number, double):
return TagByteSize(field_number) + 8
def BoolByteSize(field_number, b):
return TagByteSize(field_number) + 1
def EnumByteSize(field_number, enum):
return UInt32ByteSize(field_number, enum)
def StringByteSize(field_number, string):
return BytesByteSize(field_number, string.encode('utf-8'))
def BytesByteSize(field_number, b):
return (TagByteSize(field_number)
+ _VarUInt64ByteSizeNoTag(len(b))
+ len(b))
def GroupByteSize(field_number, message):
return (2 * TagByteSize(field_number) # START and END group.
+ message.ByteSize())
def MessageByteSize(field_number, message):
return (TagByteSize(field_number)
+ _VarUInt64ByteSizeNoTag(message.ByteSize())
+ message.ByteSize())
def MessageSetItemByteSize(field_number, msg):
# First compute the sizes of the tags.
# There are 2 tags for the beginning and ending of the repeated group, that
# is field number 1, one with field number 2 (type_id) and one with field
# number 3 (message).
total_size = (2 * TagByteSize(1) + TagByteSize(2) + TagByteSize(3))
# Add the number of bytes for type_id.
total_size += _VarUInt64ByteSizeNoTag(field_number)
message_size = msg.ByteSize()
# The number of bytes for encoding the length of the message.
total_size += _VarUInt64ByteSizeNoTag(message_size)
# The size of the message.
total_size += message_size
return total_size
def TagByteSize(field_number):
"""Returns the bytes required to serialize a tag with this field number."""
# Just pass in type 0, since the type won't affect the tag+type size.
return _VarUInt64ByteSizeNoTag(PackTag(field_number, 0))
# Private helper function for the *ByteSize() functions above.
def _VarUInt64ByteSizeNoTag(uint64):
"""Returns the number of bytes required to serialize a single varint
using boundary value comparisons. (unrolled loop optimization -WPierce)
uint64 must be unsigned.
"""
if uint64 <= 0x7f: return 1
if uint64 <= 0x3fff: return 2
if uint64 <= 0x1fffff: return 3
if uint64 <= 0xfffffff: return 4
if uint64 <= 0x7ffffffff: return 5
if uint64 <= 0x3ffffffffff: return 6
if uint64 <= 0x1ffffffffffff: return 7
if uint64 <= 0xffffffffffffff: return 8
if uint64 <= 0x7fffffffffffffff: return 9
if uint64 > UINT64_MAX:
raise message.EncodeError('Value out of range: %d' % uint64)
return 10
NON_PACKABLE_TYPES = (
descriptor.FieldDescriptor.TYPE_STRING,
descriptor.FieldDescriptor.TYPE_GROUP,
descriptor.FieldDescriptor.TYPE_MESSAGE,
descriptor.FieldDescriptor.TYPE_BYTES
)
def IsTypePackable(field_type):
"""Return true iff packable = true is valid for fields of this type.
Args:
field_type: a FieldDescriptor::Type value.
Returns:
True iff fields of this type are packable.
"""
return field_type not in NON_PACKABLE_TYPES


@@ -0,0 +1,912 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Contains routines for printing protocol messages in JSON format.
Simple usage example:
# Create a proto object and serialize it to a json format string.
message = my_proto_pb2.MyMessage(foo='bar')
json_string = json_format.MessageToJson(message)
# Parse a json format string to proto object.
message = json_format.Parse(json_string, my_proto_pb2.MyMessage())
"""
__author__ = 'jieluo@google.com (Jie Luo)'
import base64
from collections import OrderedDict
import json
import math
from operator import methodcaller
import re
import sys
from google.protobuf.internal import type_checkers
from google.protobuf import descriptor
from google.protobuf import symbol_database
_TIMESTAMPFOMAT = '%Y-%m-%dT%H:%M:%S'
_INT_TYPES = frozenset([descriptor.FieldDescriptor.CPPTYPE_INT32,
descriptor.FieldDescriptor.CPPTYPE_UINT32,
descriptor.FieldDescriptor.CPPTYPE_INT64,
descriptor.FieldDescriptor.CPPTYPE_UINT64])
_INT64_TYPES = frozenset([descriptor.FieldDescriptor.CPPTYPE_INT64,
descriptor.FieldDescriptor.CPPTYPE_UINT64])
_FLOAT_TYPES = frozenset([descriptor.FieldDescriptor.CPPTYPE_FLOAT,
descriptor.FieldDescriptor.CPPTYPE_DOUBLE])
_INFINITY = 'Infinity'
_NEG_INFINITY = '-Infinity'
_NAN = 'NaN'
_UNPAIRED_SURROGATE_PATTERN = re.compile(
u'[\ud800-\udbff](?![\udc00-\udfff])|(?<![\ud800-\udbff])[\udc00-\udfff]')
_VALID_EXTENSION_NAME = re.compile(r'\[[a-zA-Z0-9\._]*\]$')
class Error(Exception):
"""Top-level module error for json_format."""
class SerializeToJsonError(Error):
"""Thrown if serialization to JSON fails."""
class ParseError(Error):
"""Thrown in case of parsing error."""
def MessageToJson(
message,
including_default_value_fields=False,
preserving_proto_field_name=False,
indent=2,
sort_keys=False,
use_integers_for_enums=False,
descriptor_pool=None,
float_precision=None,
ensure_ascii=True):
"""Converts protobuf message to JSON format.
Args:
message: The protocol buffers message instance to serialize.
including_default_value_fields: If True, singular primitive fields,
repeated fields, and map fields will always be serialized. If
False, only serialize non-empty fields. Singular message fields
and oneof fields are not affected by this option.
preserving_proto_field_name: If True, use the original proto field
names as defined in the .proto file. If False, convert the field
names to lowerCamelCase.
indent: The JSON object will be pretty-printed with this indent level.
An indent level of 0 or negative will only insert newlines.
sort_keys: If True, then the output will be sorted by field names.
use_integers_for_enums: If true, print integers instead of enum names.
descriptor_pool: A Descriptor Pool for resolving types. If None use the
default.
float_precision: If set, use this to specify float field valid digits.
ensure_ascii: If True, strings with non-ASCII characters are escaped.
If False, Unicode strings are returned unchanged.
Returns:
A string containing the JSON formatted protocol buffer message.
"""
printer = _Printer(
including_default_value_fields,
preserving_proto_field_name,
use_integers_for_enums,
descriptor_pool,
float_precision=float_precision)
return printer.ToJsonString(message, indent, sort_keys, ensure_ascii)
def MessageToDict(
message,
including_default_value_fields=False,
preserving_proto_field_name=False,
use_integers_for_enums=False,
descriptor_pool=None,
float_precision=None):
"""Converts protobuf message to a dictionary.
When the dictionary is encoded to JSON, it conforms to proto3 JSON spec.
Args:
message: The protocol buffers message instance to serialize.
including_default_value_fields: If True, singular primitive fields,
repeated fields, and map fields will always be serialized. If
False, only serialize non-empty fields. Singular message fields
and oneof fields are not affected by this option.
preserving_proto_field_name: If True, use the original proto field
names as defined in the .proto file. If False, convert the field
names to lowerCamelCase.
use_integers_for_enums: If true, print integers instead of enum names.
descriptor_pool: A Descriptor Pool for resolving types. If None use the
default.
float_precision: If set, use this to specify float field valid digits.
Returns:
A dict representation of the protocol buffer message.
"""
printer = _Printer(
including_default_value_fields,
preserving_proto_field_name,
use_integers_for_enums,
descriptor_pool,
float_precision=float_precision)
# pylint: disable=protected-access
return printer._MessageToJsonObject(message)
def _IsMapEntry(field):
return (field.type == descriptor.FieldDescriptor.TYPE_MESSAGE and
field.message_type.has_options and
field.message_type.GetOptions().map_entry)
class _Printer(object):
"""JSON format printer for protocol message."""
def __init__(
self,
including_default_value_fields=False,
preserving_proto_field_name=False,
use_integers_for_enums=False,
descriptor_pool=None,
float_precision=None):
self.including_default_value_fields = including_default_value_fields
self.preserving_proto_field_name = preserving_proto_field_name
self.use_integers_for_enums = use_integers_for_enums
self.descriptor_pool = descriptor_pool
if float_precision:
self.float_format = '.{}g'.format(float_precision)
else:
self.float_format = None
def ToJsonString(self, message, indent, sort_keys, ensure_ascii):
js = self._MessageToJsonObject(message)
return json.dumps(
js, indent=indent, sort_keys=sort_keys, ensure_ascii=ensure_ascii)
def _MessageToJsonObject(self, message):
"""Converts message to an object according to Proto3 JSON Specification."""
message_descriptor = message.DESCRIPTOR
full_name = message_descriptor.full_name
if _IsWrapperMessage(message_descriptor):
return self._WrapperMessageToJsonObject(message)
if full_name in _WKTJSONMETHODS:
return methodcaller(_WKTJSONMETHODS[full_name][0], message)(self)
js = {}
return self._RegularMessageToJsonObject(message, js)
def _RegularMessageToJsonObject(self, message, js):
"""Converts normal message according to Proto3 JSON Specification."""
fields = message.ListFields()
try:
for field, value in fields:
if self.preserving_proto_field_name:
name = field.name
else:
name = field.json_name
if _IsMapEntry(field):
# Convert a map field.
v_field = field.message_type.fields_by_name['value']
js_map = {}
for key in value:
if isinstance(key, bool):
if key:
recorded_key = 'true'
else:
recorded_key = 'false'
else:
recorded_key = str(key)
js_map[recorded_key] = self._FieldToJsonObject(
v_field, value[key])
js[name] = js_map
elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED:
# Convert a repeated field.
js[name] = [self._FieldToJsonObject(field, k)
for k in value]
elif field.is_extension:
name = '[%s]' % field.full_name
js[name] = self._FieldToJsonObject(field, value)
else:
js[name] = self._FieldToJsonObject(field, value)
# Serialize default value if including_default_value_fields is True.
if self.including_default_value_fields:
message_descriptor = message.DESCRIPTOR
for field in message_descriptor.fields:
# Singular message fields and oneof fields will not be affected.
if ((field.label != descriptor.FieldDescriptor.LABEL_REPEATED and
field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE) or
field.containing_oneof):
continue
if self.preserving_proto_field_name:
name = field.name
else:
name = field.json_name
if name in js:
# Skip the field which has been serialized already.
continue
if _IsMapEntry(field):
js[name] = {}
elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED:
js[name] = []
else:
js[name] = self._FieldToJsonObject(field, field.default_value)
except ValueError as e:
raise SerializeToJsonError(
'Failed to serialize {0} field: {1}.'.format(field.name, e))
return js
def _FieldToJsonObject(self, field, value):
"""Converts field value according to Proto3 JSON Specification."""
if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
return self._MessageToJsonObject(value)
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM:
if self.use_integers_for_enums:
return value
if field.enum_type.full_name == 'google.protobuf.NullValue':
return None
enum_value = field.enum_type.values_by_number.get(value, None)
if enum_value is not None:
return enum_value.name
else:
if field.file.syntax == 'proto3':
return value
raise SerializeToJsonError('Enum field contains an integer value '
'which cannot be mapped to an enum value.')
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_STRING:
if field.type == descriptor.FieldDescriptor.TYPE_BYTES:
# Use base64 data encoding for bytes values.
return base64.b64encode(value).decode('utf-8')
else:
return value
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_BOOL:
return bool(value)
elif field.cpp_type in _INT64_TYPES:
return str(value)
elif field.cpp_type in _FLOAT_TYPES:
if math.isinf(value):
if value < 0.0:
return _NEG_INFINITY
else:
return _INFINITY
if math.isnan(value):
return _NAN
if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_FLOAT:
if self.float_format:
return float(format(value, self.float_format))
else:
return type_checkers.ToShortestFloat(value)
return value
def _AnyMessageToJsonObject(self, message):
"""Converts Any message according to Proto3 JSON Specification."""
if not message.ListFields():
return {}
# Must print @type first, so use OrderedDict instead of {}.
js = OrderedDict()
type_url = message.type_url
js['@type'] = type_url
sub_message = _CreateMessageFromTypeUrl(type_url, self.descriptor_pool)
sub_message.ParseFromString(message.value)
message_descriptor = sub_message.DESCRIPTOR
full_name = message_descriptor.full_name
if _IsWrapperMessage(message_descriptor):
js['value'] = self._WrapperMessageToJsonObject(sub_message)
return js
if full_name in _WKTJSONMETHODS:
js['value'] = methodcaller(_WKTJSONMETHODS[full_name][0],
sub_message)(self)
return js
return self._RegularMessageToJsonObject(sub_message, js)
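# Illustration (per the proto3 JSON spec): an Any packing a well-known type
# keeps the payload under a 'value' key, e.g. a google.protobuf.Duration of
# 1.5 seconds becomes
#   {'@type': 'type.googleapis.com/google.protobuf.Duration', 'value': '1.500s'}
# whereas an Any packing a regular message inlines its fields next to '@type'.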
def _GenericMessageToJsonObject(self, message):
"""Converts message according to Proto3 JSON Specification."""
# Duration, Timestamp and FieldMask have a ToJsonString method to do the
# conversion. Users can also call the method directly.
return message.ToJsonString()
def _ValueMessageToJsonObject(self, message):
"""Converts Value message according to Proto3 JSON Specification."""
which = message.WhichOneof('kind')
# If the Value message is not set, treat it as null_value when serializing
# to JSON. The parsed-back result will then differ from the original message.
if which is None or which == 'null_value':
return None
if which == 'list_value':
return self._ListValueMessageToJsonObject(message.list_value)
if which == 'struct_value':
value = message.struct_value
else:
value = getattr(message, which)
oneof_descriptor = message.DESCRIPTOR.fields_by_name[which]
return self._FieldToJsonObject(oneof_descriptor, value)
def _ListValueMessageToJsonObject(self, message):
"""Converts ListValue message according to Proto3 JSON Specification."""
return [self._ValueMessageToJsonObject(value)
for value in message.values]
def _StructMessageToJsonObject(self, message):
"""Converts Struct message according to Proto3 JSON Specification."""
fields = message.fields
ret = {}
for key in fields:
ret[key] = self._ValueMessageToJsonObject(fields[key])
return ret
def _WrapperMessageToJsonObject(self, message):
return self._FieldToJsonObject(
message.DESCRIPTOR.fields_by_name['value'], message.value)
def _IsWrapperMessage(message_descriptor):
return message_descriptor.file.name == 'google/protobuf/wrappers.proto'
def _DuplicateChecker(js):
result = {}
for name, value in js:
if name in result:
raise ParseError('Failed to load JSON: duplicate key {0}.'.format(name))
result[name] = value
return result
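# Example: used as the object_pairs_hook for json.loads below, this rejects
# input such as '{"a": 1, "a": 2}' with 'Failed to load JSON: duplicate key a.'.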
def _CreateMessageFromTypeUrl(type_url, descriptor_pool):
"""Creates a message from a type URL."""
db = symbol_database.Default()
pool = db.pool if descriptor_pool is None else descriptor_pool
type_name = type_url.split('/')[-1]
try:
message_descriptor = pool.FindMessageTypeByName(type_name)
except KeyError:
raise TypeError(
'Can not find message descriptor by type_url: {0}'.format(type_url))
message_class = db.GetPrototype(message_descriptor)
return message_class()
def Parse(text,
message,
ignore_unknown_fields=False,
descriptor_pool=None,
max_recursion_depth=100):
"""Parses a JSON representation of a protocol message into a message.
Args:
text: Message JSON representation.
message: A protocol buffer message to merge into.
ignore_unknown_fields: If True, do not raise errors for unknown fields.
descriptor_pool: A Descriptor Pool for resolving types. If None, use the
default.
max_recursion_depth: maximum recursion depth the JSON message may have;
messages nested deeper than this fail to deserialize. Default value is
100.
Returns:
The same message passed as argument.
Raises:
ParseError: On JSON parsing problems.
"""
if not isinstance(text, str):
text = text.decode('utf-8')
try:
js = json.loads(text, object_pairs_hook=_DuplicateChecker)
except ValueError as e:
raise ParseError('Failed to load JSON: {0}.'.format(str(e)))
return ParseDict(js, message, ignore_unknown_fields, descriptor_pool,
max_recursion_depth)
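# Usage sketch (illustrative): `MyMessage` is a hypothetical generated class
# with a string field `name`:
#
#   msg = Parse('{"name": "x"}', MyMessage())
#   msg.name   # 'x'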
def ParseDict(js_dict,
message,
ignore_unknown_fields=False,
descriptor_pool=None,
max_recursion_depth=100):
"""Parses a JSON dictionary representation into a message.
Args:
js_dict: Dict representation of a JSON message.
message: A protocol buffer message to merge into.
ignore_unknown_fields: If True, do not raise errors for unknown fields.
descriptor_pool: A Descriptor Pool for resolving types. If None, use the
default.
max_recursion_depth: maximum recursion depth the JSON message may have;
messages nested deeper than this fail to deserialize. Default value is
100.
Returns:
The same message passed as argument.
"""
parser = _Parser(ignore_unknown_fields, descriptor_pool, max_recursion_depth)
parser.ConvertMessage(js_dict, message, '')
return message
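# ParseDict is the entry point when the JSON has already been decoded, e.g.
# (with the same hypothetical `MyMessage`):
#
#   msg = ParseDict({'name': 'x'}, MyMessage())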
_INT_OR_FLOAT = (int, float)
class _Parser(object):
"""JSON format parser for protocol message."""
def __init__(self, ignore_unknown_fields, descriptor_pool,
max_recursion_depth):
self.ignore_unknown_fields = ignore_unknown_fields
self.descriptor_pool = descriptor_pool
self.max_recursion_depth = max_recursion_depth
self.recursion_depth = 0
def ConvertMessage(self, value, message, path):
"""Convert a JSON object into a message.
Args:
value: A JSON object.
message: A WKT or regular protocol message to record the data.
path: parent path to log parse error info.
Raises:
ParseError: In case of convert problems.
"""
self.recursion_depth += 1
if self.recursion_depth > self.max_recursion_depth:
raise ParseError('Message too deep. Max recursion depth is {0}'.format(
self.max_recursion_depth))
message_descriptor = message.DESCRIPTOR
full_name = message_descriptor.full_name
if not path:
path = message_descriptor.name
if _IsWrapperMessage(message_descriptor):
self._ConvertWrapperMessage(value, message, path)
elif full_name in _WKTJSONMETHODS:
methodcaller(_WKTJSONMETHODS[full_name][1], value, message, path)(self)
else:
self._ConvertFieldValuePair(value, message, path)
self.recursion_depth -= 1
def _ConvertFieldValuePair(self, js, message, path):
"""Convert field value pairs into regular message.
Args:
js: A JSON object to convert the field value pairs.
message: A regular protocol message to record the data.
path: parent path to log parse error info.
Raises:
ParseError: In case of problems converting.
"""
names = []
message_descriptor = message.DESCRIPTOR
fields_by_json_name = dict((f.json_name, f)
for f in message_descriptor.fields)
for name in js:
try:
field = fields_by_json_name.get(name, None)
if not field:
field = message_descriptor.fields_by_name.get(name, None)
if not field and _VALID_EXTENSION_NAME.match(name):
if not message_descriptor.is_extendable:
raise ParseError(
'Message type {0} does not have extensions at {1}'.format(
message_descriptor.full_name, path))
identifier = name[1:-1] # strip [] brackets
# pylint: disable=protected-access
field = message.Extensions._FindExtensionByName(identifier)
# pylint: enable=protected-access
if not field:
# Try looking for extension by the message type name, dropping the
# field name following the final . separator in full_name.
identifier = '.'.join(identifier.split('.')[:-1])
# pylint: disable=protected-access
field = message.Extensions._FindExtensionByName(identifier)
# pylint: enable=protected-access
if not field:
if self.ignore_unknown_fields:
continue
raise ParseError(
('Message type "{0}" has no field named "{1}" at "{2}".\n'
' Available Fields(except extensions): "{3}"').format(
message_descriptor.full_name, name, path,
[f.json_name for f in message_descriptor.fields]))
if name in names:
raise ParseError('Message type "{0}" should not have multiple '
'"{1}" fields at "{2}".'.format(
message.DESCRIPTOR.full_name, name, path))
names.append(name)
value = js[name]
# Check no other oneof field is parsed.
if field.containing_oneof is not None and value is not None:
oneof_name = field.containing_oneof.name
if oneof_name in names:
raise ParseError('Message type "{0}" should not have multiple '
'"{1}" oneof fields at "{2}".'.format(
message.DESCRIPTOR.full_name, oneof_name,
path))
names.append(oneof_name)
if value is None:
if (field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE
and field.message_type.full_name == 'google.protobuf.Value'):
sub_message = getattr(message, field.name)
sub_message.null_value = 0
elif (field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM
and field.enum_type.full_name == 'google.protobuf.NullValue'):
setattr(message, field.name, 0)
else:
message.ClearField(field.name)
continue
# Parse field value.
if _IsMapEntry(field):
message.ClearField(field.name)
self._ConvertMapFieldValue(value, message, field,
'{0}.{1}'.format(path, name))
elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED:
message.ClearField(field.name)
if not isinstance(value, list):
raise ParseError('repeated field {0} must be in [] which is '
'{1} at {2}'.format(name, value, path))
if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
# Repeated message field.
for index, item in enumerate(value):
sub_message = getattr(message, field.name).add()
# None is a null_value in Value.
if (item is None and
sub_message.DESCRIPTOR.full_name != 'google.protobuf.Value'):
raise ParseError('null is not allowed to be used as an element'
' in a repeated field at {0}.{1}[{2}]'.format(
path, name, index))
self.ConvertMessage(item, sub_message,
'{0}.{1}[{2}]'.format(path, name, index))
else:
# Repeated scalar field.
for index, item in enumerate(value):
if item is None:
raise ParseError('null is not allowed to be used as an element'
' in a repeated field at {0}.{1}[{2}]'.format(
path, name, index))
getattr(message, field.name).append(
_ConvertScalarFieldValue(
item, field, '{0}.{1}[{2}]'.format(path, name, index)))
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
if field.is_extension:
sub_message = message.Extensions[field]
else:
sub_message = getattr(message, field.name)
sub_message.SetInParent()
self.ConvertMessage(value, sub_message, '{0}.{1}'.format(path, name))
else:
if field.is_extension:
message.Extensions[field] = _ConvertScalarFieldValue(
value, field, '{0}.{1}'.format(path, name))
else:
setattr(
message, field.name,
_ConvertScalarFieldValue(value, field,
'{0}.{1}'.format(path, name)))
except ParseError as e:
if field and field.containing_oneof is None:
raise ParseError('Failed to parse {0} field: {1}.'.format(name, e))
else:
raise ParseError(str(e))
except ValueError as e:
raise ParseError('Failed to parse {0} field: {1}.'.format(name, e))
except TypeError as e:
raise ParseError('Failed to parse {0} field: {1}.'.format(name, e))
def _ConvertAnyMessage(self, value, message, path):
"""Convert a JSON representation into Any message."""
if isinstance(value, dict) and not value:
return
try:
type_url = value['@type']
except KeyError:
raise ParseError(
'@type is missing when parsing any message at {0}'.format(path))
try:
sub_message = _CreateMessageFromTypeUrl(type_url, self.descriptor_pool)
except TypeError as e:
raise ParseError('{0} at {1}'.format(e, path))
message_descriptor = sub_message.DESCRIPTOR
full_name = message_descriptor.full_name
if _IsWrapperMessage(message_descriptor):
self._ConvertWrapperMessage(value['value'], sub_message,
'{0}.value'.format(path))
elif full_name in _WKTJSONMETHODS:
methodcaller(_WKTJSONMETHODS[full_name][1], value['value'], sub_message,
'{0}.value'.format(path))(
self)
else:
del value['@type']
self._ConvertFieldValuePair(value, sub_message, path)
value['@type'] = type_url
# Sets Any message
message.value = sub_message.SerializeToString()
message.type_url = type_url
def _ConvertGenericMessage(self, value, message, path):
"""Convert a JSON representation into message with FromJsonString."""
# Duration, Timestamp, FieldMask have a FromJsonString method to do the
# conversion. Users can also call the method directly.
try:
message.FromJsonString(value)
except ValueError as e:
raise ParseError('{0} at {1}'.format(e, path))
def _ConvertValueMessage(self, value, message, path):
"""Convert a JSON representation into Value message."""
if isinstance(value, dict):
self._ConvertStructMessage(value, message.struct_value, path)
elif isinstance(value, list):
self._ConvertListValueMessage(value, message.list_value, path)
elif value is None:
message.null_value = 0
elif isinstance(value, bool):
message.bool_value = value
elif isinstance(value, str):
message.string_value = value
elif isinstance(value, _INT_OR_FLOAT):
message.number_value = value
else:
raise ParseError('Value {0} has unexpected type {1} at {2}'.format(
value, type(value), path))
def _ConvertListValueMessage(self, value, message, path):
"""Convert a JSON representation into ListValue message."""
if not isinstance(value, list):
raise ParseError('ListValue must be in [] which is {0} at {1}'.format(
value, path))
message.ClearField('values')
for index, item in enumerate(value):
self._ConvertValueMessage(item, message.values.add(),
'{0}[{1}]'.format(path, index))
def _ConvertStructMessage(self, value, message, path):
"""Convert a JSON representation into Struct message."""
if not isinstance(value, dict):
raise ParseError('Struct must be in a dict which is {0} at {1}'.format(
value, path))
# Clear will mark the struct as modified so it will be created even if
# there are no values.
message.Clear()
for key in value:
self._ConvertValueMessage(value[key], message.fields[key],
'{0}.{1}'.format(path, key))
return
def _ConvertWrapperMessage(self, value, message, path):
"""Convert a JSON representation into Wrapper message."""
field = message.DESCRIPTOR.fields_by_name['value']
setattr(
message, 'value',
_ConvertScalarFieldValue(value, field, path='{0}.value'.format(path)))
def _ConvertMapFieldValue(self, value, message, field, path):
"""Convert map field value for a message map field.
Args:
value: A JSON object to convert the map field value.
message: A protocol message to record the converted data.
field: The descriptor of the map field to be converted.
path: parent path to log parse error info.
Raises:
ParseError: In case of convert problems.
"""
if not isinstance(value, dict):
raise ParseError(
'Map field {0} must be in a dict which is {1} at {2}'.format(
field.name, value, path))
key_field = field.message_type.fields_by_name['key']
value_field = field.message_type.fields_by_name['value']
for key in value:
key_value = _ConvertScalarFieldValue(key, key_field,
'{0}.key'.format(path), True)
if value_field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
self.ConvertMessage(value[key],
getattr(message, field.name)[key_value],
'{0}[{1}]'.format(path, key_value))
else:
getattr(message, field.name)[key_value] = _ConvertScalarFieldValue(
value[key], value_field, path='{0}[{1}]'.format(path, key_value))
def _ConvertScalarFieldValue(value, field, path, require_str=False):
"""Convert a single scalar field value.
Args:
value: A scalar value to convert the scalar field value.
field: The descriptor of the field to convert.
path: parent path to log parse error info.
require_str: If True, the field value must be a str.
Returns:
The converted scalar field value
Raises:
ParseError: In case of convert problems.
"""
try:
if field.cpp_type in _INT_TYPES:
return _ConvertInteger(value)
elif field.cpp_type in _FLOAT_TYPES:
return _ConvertFloat(value, field)
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_BOOL:
return _ConvertBool(value, require_str)
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_STRING:
if field.type == descriptor.FieldDescriptor.TYPE_BYTES:
if isinstance(value, str):
encoded = value.encode('utf-8')
else:
encoded = value
# Pad to a multiple of 4 with '='; the decoder ignores any excess padding.
padded_value = encoded + b'=' * (4 - len(encoded) % 4)
return base64.urlsafe_b64decode(padded_value)
else:
# Checking for unpaired surrogates appears to be unreliable,
# depending on the specific Python version, so we check manually.
if _UNPAIRED_SURROGATE_PATTERN.search(value):
raise ParseError('Unpaired surrogate')
return value
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM:
# Convert an enum value.
enum_value = field.enum_type.values_by_name.get(value, None)
if enum_value is None:
try:
number = int(value)
enum_value = field.enum_type.values_by_number.get(number, None)
except ValueError:
raise ParseError('Invalid enum value {0} for enum type {1}'.format(
value, field.enum_type.full_name))
if enum_value is None:
if field.file.syntax == 'proto3':
# Proto3 accepts unknown enums.
return number
raise ParseError('Invalid enum value {0} for enum type {1}'.format(
value, field.enum_type.full_name))
return enum_value.number
except ParseError as e:
raise ParseError('{0} at {1}'.format(e, path))
def _ConvertInteger(value):
"""Convert an integer.
Args:
value: A scalar value to convert.
Returns:
The integer value.
Raises:
ParseError: If an integer couldn't be consumed.
"""
if isinstance(value, float) and not value.is_integer():
raise ParseError('Couldn\'t parse integer: {0}'.format(value))
if isinstance(value, str) and value.find(' ') != -1:
raise ParseError('Couldn\'t parse integer: "{0}"'.format(value))
if isinstance(value, bool):
raise ParseError('Bool value {0} is not acceptable for '
'integer field'.format(value))
return int(value)
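# Examples of the rules above: _ConvertInteger(2.0) == 2 and
# _ConvertInteger('3') == 3, while _ConvertInteger(2.5), _ConvertInteger('1 2')
# and _ConvertInteger(True) all raise ParseError.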
def _ConvertFloat(value, field):
"""Convert an floating point number."""
if isinstance(value, float):
if math.isnan(value):
raise ParseError('Couldn\'t parse NaN, use quoted "NaN" instead')
if math.isinf(value):
if value > 0:
raise ParseError('Couldn\'t parse Infinity or value too large, '
'use quoted "Infinity" instead')
else:
raise ParseError('Couldn\'t parse -Infinity or value too small, '
'use quoted "-Infinity" instead')
if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_FLOAT:
# pylint: disable=protected-access
if value > type_checkers._FLOAT_MAX:
raise ParseError('Float value too large')
# pylint: disable=protected-access
if value < type_checkers._FLOAT_MIN:
raise ParseError('Float value too small')
if value == 'nan':
raise ParseError('Couldn\'t parse float "nan", use "NaN" instead')
try:
# Assume Python compatible syntax.
return float(value)
except ValueError:
# Check alternative spellings.
if value == _NEG_INFINITY:
return float('-inf')
elif value == _INFINITY:
return float('inf')
elif value == _NAN:
return float('nan')
else:
raise ParseError('Couldn\'t parse float: {0}'.format(value))
def _ConvertBool(value, require_str):
"""Convert a boolean value.
Args:
value: A scalar value to convert.
require_str: If True, value must be a str.
Returns:
The bool parsed.
Raises:
ParseError: If a boolean value couldn't be consumed.
"""
if require_str:
if value == 'true':
return True
elif value == 'false':
return False
else:
raise ParseError('Expected "true" or "false", not {0}'.format(value))
if not isinstance(value, bool):
raise ParseError('Expected true or false without quotes')
return value
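# Map keys arrive as JSON strings, hence require_str=True accepts only the
# quoted spellings, while regular bool fields accept only JSON true/false:
#   _ConvertBool('true', True)   # True
#   _ConvertBool(True, False)    # True
#   _ConvertBool('true', False)  # raises ParseError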
_WKTJSONMETHODS = {
'google.protobuf.Any': ['_AnyMessageToJsonObject',
'_ConvertAnyMessage'],
'google.protobuf.Duration': ['_GenericMessageToJsonObject',
'_ConvertGenericMessage'],
'google.protobuf.FieldMask': ['_GenericMessageToJsonObject',
'_ConvertGenericMessage'],
'google.protobuf.ListValue': ['_ListValueMessageToJsonObject',
'_ConvertListValueMessage'],
'google.protobuf.Struct': ['_StructMessageToJsonObject',
'_ConvertStructMessage'],
'google.protobuf.Timestamp': ['_GenericMessageToJsonObject',
'_ConvertGenericMessage'],
'google.protobuf.Value': ['_ValueMessageToJsonObject',
'_ConvertValueMessage']
}


@@ -0,0 +1,424 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# TODO(robinson): We should just make these methods all "pure-virtual" and move
# all implementation out, into reflection.py for now.
"""Contains an abstract base class for protocol messages."""
__author__ = 'robinson@google.com (Will Robinson)'
class Error(Exception):
"""Base error type for this module."""
pass
class DecodeError(Error):
"""Exception raised when deserializing messages."""
pass
class EncodeError(Error):
"""Exception raised when serializing messages."""
pass
class Message(object):
"""Abstract base class for protocol messages.
Protocol message classes are almost always generated by the protocol
compiler. These generated types subclass Message and implement the methods
shown below.
"""
# TODO(robinson): Link to an HTML document here.
# TODO(robinson): Document that instances of this class will also
# have an Extensions attribute with __getitem__ and __setitem__.
# Again, not sure how to best convey this.
# TODO(robinson): Document that the class must also have a static
# RegisterExtension(extension_field) method.
# Not sure how to best express at this point.
# TODO(robinson): Document these fields and methods.
__slots__ = []
#: The :class:`google.protobuf.descriptor.Descriptor` for this message type.
DESCRIPTOR = None
def __deepcopy__(self, memo=None):
clone = type(self)()
clone.MergeFrom(self)
return clone
def __eq__(self, other_msg):
"""Recursively compares two messages by value and structure."""
raise NotImplementedError
def __ne__(self, other_msg):
# Can't just say self != other_msg, since that would infinitely recurse. :)
return not self == other_msg
def __hash__(self):
raise TypeError('unhashable object')
def __str__(self):
"""Outputs a human-readable representation of the message."""
raise NotImplementedError
def __unicode__(self):
"""Outputs a human-readable representation of the message."""
raise NotImplementedError
def MergeFrom(self, other_msg):
"""Merges the contents of the specified message into current message.
This method merges the contents of the specified message into the current
message. Singular fields that are set in the specified message overwrite
the corresponding fields in the current message. Repeated fields are
appended. Singular sub-messages and groups are recursively merged.
Args:
other_msg (Message): A message to merge into the current message.
"""
raise NotImplementedError
def CopyFrom(self, other_msg):
"""Copies the content of the specified message into the current message.
The method clears the current message and then merges the specified
message using MergeFrom.
Args:
other_msg (Message): A message to copy into the current one.
"""
if self is other_msg:
return
self.Clear()
self.MergeFrom(other_msg)
def Clear(self):
"""Clears all data that was set in the message."""
raise NotImplementedError
def SetInParent(self):
"""Mark this as present in the parent.
This normally happens automatically when you assign a field of a
sub-message, but sometimes you want to make the sub-message
present while keeping it empty. If you find yourself using this,
you may want to reconsider your design.
"""
raise NotImplementedError
def IsInitialized(self):
"""Checks if the message is initialized.
Returns:
bool: The method returns True if the message is initialized (i.e. all of
its required fields are set).
"""
raise NotImplementedError
# TODO(robinson): MergeFromString() should probably return None and be
# implemented in terms of a helper that returns the # of bytes read. Our
# deserialization routines would use the helper when recursively
# deserializing, but the end user would almost always just want the no-return
# MergeFromString().
def MergeFromString(self, serialized):
"""Merges serialized protocol buffer data into this message.
When we find a field in `serialized` that is already present
in this message:
- If it's a "repeated" field, we append to the end of our list.
- Else, if it's a scalar, we overwrite our field.
- Else, (it's a nonrepeated composite), we recursively merge
into the existing composite.
Args:
serialized (bytes): Any object that allows us to call
``memoryview(serialized)`` to access a string of bytes using the
buffer interface.
Returns:
int: The number of bytes read from `serialized`.
For non-group messages, this will always be `len(serialized)`,
but for messages which are actually groups, this will
generally be less than `len(serialized)`, since we must
stop when we reach an ``END_GROUP`` tag. Note that if
we *do* stop because of an ``END_GROUP`` tag, the number
of bytes returned does not include the bytes
for the ``END_GROUP`` tag information.
Raises:
DecodeError: if the input cannot be parsed.
"""
# TODO(robinson): Document handling of unknown fields.
# TODO(robinson): When we switch to a helper, this will return None.
raise NotImplementedError
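# Illustrative sketch with a hypothetical generated class `MyMessage` that has
# a repeated string field `items`; merging appends repeated fields:
#
#   a = MyMessage(items=['x'])
#   a.MergeFromString(MyMessage(items=['y']).SerializeToString())
#   # list(a.items) == ['x', 'y']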
def ParseFromString(self, serialized):
"""Parse serialized protocol buffer data into this message.
Like :func:`MergeFromString()`, except we clear the object first.
Raises:
DecodeError: if the input cannot be parsed.
"""
self.Clear()
return self.MergeFromString(serialized)
def SerializeToString(self, **kwargs):
"""Serializes the protocol message to a binary string.
Keyword Args:
deterministic (bool): If true, requests deterministic serialization
of the protobuf, with predictable ordering of map keys.
Returns:
A binary string representation of the message if all of the required
fields in the message are set (i.e. the message is initialized).
Raises:
EncodeError: if the message isn't initialized (see :func:`IsInitialized`).
"""
raise NotImplementedError
def SerializePartialToString(self, **kwargs):
"""Serializes the protocol message to a binary string.
This method is similar to SerializeToString but doesn't check if the
message is initialized.
Keyword Args:
deterministic (bool): If true, requests deterministic serialization
of the protobuf, with predictable ordering of map keys.
Returns:
bytes: A serialized representation of the partial message.
"""
raise NotImplementedError
# TODO(robinson): Decide whether we like these better
# than auto-generated has_foo() and clear_foo() methods
# on the instances themselves. This way is less consistent
# with C++, but it makes reflection-type access easier and
# reduces the number of magically autogenerated things.
#
# TODO(robinson): Be sure to document (and test) exactly
# which field names are accepted here. Are we case-sensitive?
# What do we do with fields that share names with Python keywords
# like 'lambda' and 'yield'?
#
# nnorwitz says:
# """
# Typically (in python), an underscore is appended to names that are
# keywords. So they would become lambda_ or yield_.
# """
def ListFields(self):
"""Returns a list of (FieldDescriptor, value) tuples for present fields.
A message field is non-empty if HasField() would return true. A singular
primitive field is non-empty if HasField() would return true in proto2 or it
is non-zero in proto3. A repeated field is non-empty if it contains at least
one element. The fields are ordered by field number.
Returns:
list[tuple(FieldDescriptor, value)]: field descriptors and values
for all fields in the message which are not empty. The values vary by
field type.
"""
raise NotImplementedError
def HasField(self, field_name):
"""Checks if a certain field is set for the message.
For a oneof group, checks if any field inside is set. Note that if the
field_name is not defined in the message descriptor, :exc:`ValueError` will
be raised.
Args:
field_name (str): The name of the field to check for presence.
Returns:
bool: Whether a value has been set for the named field.
Raises:
ValueError: if the `field_name` is not a member of this message.
"""
raise NotImplementedError
def ClearField(self, field_name):
"""Clears the contents of a given field.
Inside a oneof group, clears the field set. If the name refers to neither a
defined field nor a oneof group, :exc:`ValueError` is raised.
Args:
field_name (str): The name of the field to clear.
Raises:
ValueError: if the `field_name` is not a member of this message.
"""
raise NotImplementedError
def WhichOneof(self, oneof_group):
"""Returns the name of the field that is set inside a oneof group.
If no field is set, returns None.
Args:
oneof_group (str): the name of the oneof group to check.
Returns:
str or None: The name of the group that is set, or None.
Raises:
ValueError: no group with the given name exists
"""
raise NotImplementedError
def HasExtension(self, extension_handle):
"""Checks if a certain extension is present for this message.
Extensions are retrieved using the :attr:`Extensions` mapping (if present).
Args:
extension_handle: The handle for the extension to check.
Returns:
bool: Whether the extension is present for this message.
Raises:
KeyError: if the extension is repeated. Similar to repeated fields,
there is no separate notion of presence: a "not present" repeated
extension is an empty list.
"""
raise NotImplementedError
def ClearExtension(self, extension_handle):
"""Clears the contents of a given extension.
Args:
extension_handle: The handle for the extension to clear.
"""
raise NotImplementedError
def UnknownFields(self):
"""Returns the UnknownFieldSet.
Returns:
UnknownFieldSet: The unknown fields stored in this message.
"""
raise NotImplementedError
def DiscardUnknownFields(self):
"""Clears all fields in the :class:`UnknownFieldSet`.
This operation is recursive for nested messages.
"""
raise NotImplementedError
def ByteSize(self):
"""Returns the serialized size of this message.
Recursively calls ByteSize() on all contained messages.
Returns:
int: The number of bytes required to serialize this message.
"""
raise NotImplementedError
@classmethod
def FromString(cls, s):
raise NotImplementedError
@staticmethod
def RegisterExtension(extension_handle):
raise NotImplementedError
def _SetListener(self, message_listener):
"""Internal method used by the protocol message implementation.
Clients should not call this directly.
Sets a listener that this message will call on certain state transitions.
The purpose of this method is to register back-edges from children to
parents at runtime, for the purpose of setting "has" bits and
byte-size-dirty bits in the parent and ancestor objects whenever a child or
descendant object is modified.
If the client wants to disconnect this Message from the object tree, it
explicitly sets the callback to None.
If message_listener is None, unregisters any existing listener. Otherwise,
message_listener must implement the MessageListener interface in
internal/message_listener.py, and we discard any listener registered
via a previous _SetListener() call.
"""
raise NotImplementedError
def __getstate__(self):
"""Support the pickle protocol."""
return dict(serialized=self.SerializePartialToString())
def __setstate__(self, state):
"""Support the pickle protocol."""
self.__init__()
serialized = state['serialized']
# On Python 3, using encoding='latin1' is required for unpickling
# protos pickled by Python 2.
if not isinstance(serialized, bytes):
serialized = serialized.encode('latin1')
self.ParseFromString(serialized)
def __reduce__(self):
message_descriptor = self.DESCRIPTOR
if message_descriptor.containing_type is None:
return type(self), (), self.__getstate__()
# the message type must be nested.
# Python does not pickle nested classes; use the symbol_database on the
# receiving end.
container = message_descriptor
return (_InternalConstructMessage, (container.full_name,),
self.__getstate__())
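# Net effect: messages pickle via their serialized bytes, so a round trip
# preserves content but not object identity:
#
#   import pickle
#   clone = pickle.loads(pickle.dumps(msg))   # msg: any message instance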
def _InternalConstructMessage(full_name):
"""Constructs a nested message."""
from google.protobuf import symbol_database # pylint:disable=g-import-not-at-top
return symbol_database.Default().GetSymbol(full_name)()


@@ -0,0 +1,185 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Provides a factory class for generating dynamic messages.
The easiest way to use this class: if you have access to the FileDescriptor
protos containing the messages you want to create, you can do the following:
message_classes = message_factory.GetMessages(iterable_of_file_descriptors)
my_proto_instance = message_classes['some.proto.package.MessageName']()
"""
__author__ = 'matthewtoia@google.com (Matt Toia)'
from google.protobuf.internal import api_implementation
from google.protobuf import descriptor_pool
from google.protobuf import message
if api_implementation.Type() == 'cpp':
from google.protobuf.pyext import cpp_message as message_impl
else:
from google.protobuf.internal import python_message as message_impl
# The type of all Message classes.
_GENERATED_PROTOCOL_MESSAGE_TYPE = message_impl.GeneratedProtocolMessageType
class MessageFactory(object):
"""Factory for creating Proto2 messages from descriptors in a pool."""
def __init__(self, pool=None):
"""Initializes a new factory."""
self.pool = pool or descriptor_pool.DescriptorPool()
# local cache of all classes built from protobuf descriptors
self._classes = {}
def GetPrototype(self, descriptor):
"""Obtains a proto2 message class based on the passed in descriptor.
Passing a descriptor with a fully qualified name matching a previous
invocation will cause the same class to be returned.
Args:
descriptor: The descriptor to build from.
Returns:
A class describing the passed in descriptor.
"""
if descriptor not in self._classes:
result_class = self.CreatePrototype(descriptor)
# The assignment to _classes is redundant for the base implementation, but
# might avoid confusion in cases where CreatePrototype gets overridden and
# does not call the base implementation.
self._classes[descriptor] = result_class
return result_class
return self._classes[descriptor]
def CreatePrototype(self, descriptor):
"""Builds a proto2 message class based on the passed in descriptor.
Don't call this function directly; it always creates a new class. Call
GetPrototype() instead. This method is meant to be overridden in subclasses
to perform additional operations on the newly constructed class.
Args:
descriptor: The descriptor to build from.
Returns:
A class describing the passed in descriptor.
"""
descriptor_name = descriptor.name
result_class = _GENERATED_PROTOCOL_MESSAGE_TYPE(
descriptor_name,
(message.Message,),
{
'DESCRIPTOR': descriptor,
# If module not set, it wrongly points to message_factory module.
'__module__': None,
})
result_class._FACTORY = self # pylint: disable=protected-access
# Assign in _classes before doing recursive calls to avoid infinite
# recursion.
self._classes[descriptor] = result_class
for field in descriptor.fields:
if field.message_type:
self.GetPrototype(field.message_type)
for extension in result_class.DESCRIPTOR.extensions:
if extension.containing_type not in self._classes:
self.GetPrototype(extension.containing_type)
extended_class = self._classes[extension.containing_type]
extended_class.RegisterExtension(extension)
return result_class
def GetMessages(self, files):
"""Gets all the messages from a specified file.
This will find and resolve dependencies, failing if the descriptor
pool cannot satisfy them.
Args:
files: The file names to extract messages from.
Returns:
A dictionary mapping proto names to the message classes. This will include
any dependent messages as well as any messages defined in the same file as
a specified message.
"""
result = {}
for file_name in files:
file_desc = self.pool.FindFileByName(file_name)
for desc in file_desc.message_types_by_name.values():
result[desc.full_name] = self.GetPrototype(desc)
# While the extension FieldDescriptors are created by the descriptor pool,
# the python classes created in the factory need them to be registered
# explicitly, which is done below.
#
# The call to RegisterExtension will specifically check if the
# extension was already registered on the object and either
# ignore the registration if the original was the same, or raise
# an error if they were different.
for extension in file_desc.extensions_by_name.values():
if extension.containing_type not in self._classes:
self.GetPrototype(extension.containing_type)
extended_class = self._classes[extension.containing_type]
extended_class.RegisterExtension(extension)
return result
_FACTORY = MessageFactory()
def GetMessages(file_protos):
"""Builds a dictionary of all the messages available in a set of files.
Args:
file_protos: Iterable of FileDescriptorProto to build messages out of.
Returns:
A dictionary mapping proto names to the message classes. This will include
any dependent messages as well as any messages defined in the same file as
a specified message.
"""
# The cpp implementation of the protocol buffer library requires adding the
# messages in topological order of the dependency graph.
file_by_name = {file_proto.name: file_proto for file_proto in file_protos}
def _AddFile(file_proto):
for dependency in file_proto.dependency:
if dependency in file_by_name:
# Remove from elements to be visited, in order to cut cycles.
_AddFile(file_by_name.pop(dependency))
_FACTORY.pool.Add(file_proto)
while file_by_name:
_AddFile(file_by_name.popitem()[1])
return _FACTORY.GetMessages([file_proto.name for file_proto in file_protos])
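# Usage sketch (illustrative): the FileDescriptorProtos typically come from a
# FileDescriptorSet written by `protoc --descriptor_set_out=descriptors.pb`:
#
#   from google.protobuf import descriptor_pb2
#   fds = descriptor_pb2.FileDescriptorSet()
#   with open('descriptors.pb', 'rb') as f:
#     fds.ParseFromString(f.read())
#   classes = GetMessages(list(fds.file))
#   msg = classes['some.package.SomeMessage']()  # hypothetical message name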

Some files were not shown because too many files have changed in this diff.