Compare commits
sherlock_i...master
387 commits
Author | SHA1 | Date |
---|---|---|
WTMike24 | 0aaffbf5f0 | 3 years ago |
bobloy | 771d1457e5 | 3 years ago |
bobloy | d191abf6bd | 3 years ago |
bobloy | aba9840bcb | 3 years ago |
bobloy | 0064e6e9b5 | 3 years ago |
bobloy | 955624ad1a | 3 years ago |
bobloy | a6e0646b85 | 4 years ago |
bobloy | 269266ce04 | 4 years ago |
bobloy | db3ce30122 | 4 years ago |
bobloy | 1e87eacf83 | 4 years ago |
bobloy | 61fa006e33 | 4 years ago |
bobloy | 4183607372 | 4 years ago |
bobloy | 2421c4e9bf | 4 years ago |
bobloy | b2e843e781 | 4 years ago |
aleclol | a7ce815e14 | 4 years ago |
bobloy | 1e535c2f3e | 4 years ago |
bobloy | fd80819618 | 4 years ago |
bobloy | a5ff888f4c | 4 years ago |
bobloy | 3eb499bf0e | 4 years ago |
bobloy | 698dafade4 | 4 years ago |
Brad Duncan | 15ecf72c64 | 4 years ago |
Brad Duncan | 1d514f80c6 | 4 years ago |
bobloy | b752bfd153 | 4 years ago |
bobloy | 47269ba8f4 | 4 years ago |
bobloy | e0a361b952 | 4 years ago |
bobloy | 2a18760a83 | 4 years ago |
bobloy | b26afdf2db | 4 years ago |
bobloy | 86cc1fa35a | 4 years ago |
aleclol | b331238c1c | 4 years ago |
bobloy | 6f0c88b1ac | 4 years ago |
bobloy | c165313031 | 4 years ago |
bobloy | 9c63c12656 | 4 years ago |
bobloy | a55ae8a511 | 4 years ago |
Alexander Soloviev | 1ffac638ed | 4 years ago |
Alexander Soloviev | 5b75002c88 | 4 years ago |
Alexander Soloviev | 7c08a5de0c | 4 years ago |
bobloy | bc12aa866e | 4 years ago |
bobloy | dbafd6ffd7 | 4 years ago |
aleclol | 52ca2f6a45 | 4 years ago |
bobloy | 2937b6ac92 | 4 years ago |
bobloy | dbf84c8b81 | 4 years ago |
PhenoM4n4n | 2f21de6a97 | 4 years ago |
bobloy | ed6cc433c8 | 4 years ago |
bobloy | e1a30359d8 | 4 years ago |
bobloy | 506a79c6d6 | 4 years ago |
bobloy | 1ddcd98078 | 4 years ago |
bobloy | 7bd004c78e | 4 years ago |
bobloy | 184833421a | 4 years ago |
Kreusada | f04ff6886b | 4 years ago |
bobloy | 28edcc1fdd | 4 years ago |
bobloy | 10ed1f9b9f | 4 years ago |
bobloy | 93b403b35f | 4 years ago |
bobloy | 5f30bc1234 | 4 years ago |
bobloy | 9f22dfb790 | 4 years ago |
bobloy | ea126db0c5 | 4 years ago |
bobloy | e1297a4dca | 4 years ago |
bobloy | 87187abbb3 | 4 years ago |
bobloy | db24bb4db4 | 4 years ago |
bobloy | 1319d98972 | 4 years ago |
bobloy | 802929d757 | 4 years ago |
bobloy | 59fd96fc5a | 4 years ago |
bobloy | b4f20dd7d2 | 4 years ago |
bobloy | ac9cf1e589 | 4 years ago |
bobloy | 8feb21e34b | 4 years ago |
bobloy | 04ccb435f8 | 4 years ago |
bobloy | eac7aee82c | 4 years ago |
bobloy | 8200cd9af1 | 4 years ago |
bobloy | ff9610ff77 | 4 years ago |
bobloy | 95931d24f3 | 4 years ago |
bobloy | dad14fe972 | 4 years ago |
bobloy | ea88addc42 | 4 years ago |
Sourcery AI | 0475b18437 | 4 years ago |
bobloy | 7811c71edb | 4 years ago |
bobloy | 8acbc5d964 | 4 years ago |
bobloy | 59090b9eaa | 4 years ago |
bobloy | 11eb4a9dbf | 4 years ago |
bobloy | d32de1586f | 4 years ago |
bobloy | 578ea4a555 | 4 years ago |
bobloy | 920f8817d7 | 4 years ago |
Antoine Rybacki | 42bdc64028 | 4 years ago |
Antoine Rybacki | f7dad0aa3f | 4 years ago |
bobloy | cc199c395d | 4 years ago |
Antoine Rybacki | 5a26b48fda | 4 years ago |
bobloy | bf9115e13c | 4 years ago |
bobloy | a5eda8ca2a | 4 years ago |
bobloy | 221ca4074b | 4 years ago |
bobloy | 8b1ac18609 | 4 years ago |
bobloy | ee8f6bbf57 | 4 years ago |
bobloy | af3de08da2 | 4 years ago |
Obi-Wan3 | 92957bcb1f | 4 years ago |
Obi-Wan3 | 7ad6b15641 | 4 years ago |
Obi-Wan3 | 6363f5eadc | 4 years ago |
bobloy | 0e034d83ef | 4 years ago |
bobloy | 337def2fa3 | 4 years ago |
bobloy | 14f8b825d8 | 4 years ago |
bobloy | dbf6ba5a4b | 4 years ago |
bobloy | bf16630573 | 4 years ago |
bobloy | 6233db2272 | 4 years ago |
bobloy | 3f997fa804 | 4 years ago |
bobloy | 837cff7a26 | 4 years ago |
bobloy | 796edb4d35 | 4 years ago |
bobloy | 6c669dd170 | 4 years ago |
bobloy | 9bdaf73944 | 4 years ago |
bobloy | 3b50785c5b | 4 years ago |
bobloy | d14db16746 | 4 years ago |
bobloy | 2c9f3838da | 4 years ago |
bobloy | 9f10ea262d | 4 years ago |
bobloy | 320f729cc9 | 4 years ago |
bobloy | 9c9b46dc76 | 4 years ago |
bobloy | d5bc5993ea | 4 years ago |
bobloy | ce41c80c3b | 4 years ago |
bobloy | c603e4b326 | 4 years ago |
bobloy | d13fd39cfc | 4 years ago |
bobloy | 8dc81808e6 | 4 years ago |
bobloy | b2c8268c9b | 4 years ago |
bobloy | f3dab0f0c6 | 4 years ago |
bobloy | bae50f6a7a | 4 years ago |
bobloy | 5892bed5b9 | 4 years ago |
bobloy | 36826a44e7 | 4 years ago |
bobloy | 0a0e8650e4 | 4 years ago |
bobloy | d36493f5a8 | 4 years ago |
bobloy | fc8e465c33 | 4 years ago |
bobloy | 087b10deb2 | 4 years ago |
bobloy | bf3c292fee | 4 years ago |
bobloy | c7820ec40c | 4 years ago |
bobloy | b566b58e1a | 4 years ago |
ASSASSIN0831 | 69e2e5acb3 | 4 years ago |
ASSASSIN0831 | bce07f069f | 4 years ago |
ASSASSIN0831 | 9ac89aa369 | 4 years ago |
bobloy | 624e8863b1 | 4 years ago |
bobloy | 51dc2e62d4 | 4 years ago |
bobloy | d85f166062 | 4 years ago |
bobloy | 5ecb8dc826 | 4 years ago |
bobloy | 9411fff5e8 | 4 years ago |
bobloy | 0ff56d933b | 4 years ago |
bobloy | 8ab6c50625 | 4 years ago |
bobloy | 19ee6e6f24 | 4 years ago |
bobloy | b2ebddc825 | 4 years ago |
bobloy | 907fb76574 | 4 years ago |
bobloy | 3c3dd2d6cd | 4 years ago |
bobloy | a946b1b83b | 4 years ago |
bobloy | ca8c762e69 | 4 years ago |
bobloy | 721316a14e | 4 years ago |
bobloy | 40b01cff26 | 4 years ago |
bobloy | 2ea077bb0c | 4 years ago |
bobloy | 99ab9fc1b4 | 4 years ago |
bobloy | a2948322f9 | 4 years ago |
bobloy | 419863b07a | 4 years ago |
bobloy | 8015e4a46d | 4 years ago |
bobloy | 1f1d116a56 | 4 years ago |
bobloy | db538f7530 | 4 years ago |
bobloy | 3bb6af9b9b | 4 years ago |
bobloy | fc0870af68 | 4 years ago |
bobloy | 5fffaf4893 | 4 years ago |
bobloy | 477364f9bf | 4 years ago |
bobloy | 6e2c62d897 | 4 years ago |
bobloy | 675e9b82c8 | 4 years ago |
bobloy | 4634210960 | 4 years ago |
bobloy | a6ebe02233 | 4 years ago |
bobloy | 26234e3b18 | 4 years ago |
bobloy | 37c699eeee | 4 years ago |
bobloy | 5f58d1d658 | 4 years ago |
bobloy | 5ddafff59f | 4 years ago |
bobloy | d71e3afb86 | 4 years ago |
bobloy | 5611f7abe7 | 4 years ago |
bobloy | 1e8d1efb57 | 4 years ago |
bobloy | a92c373b49 | 4 years ago |
bobloy | 60806fb19c | 4 years ago |
bobloy | 4c1cd86930 | 4 years ago |
bobloy | 10767da507 | 4 years ago |
bobloy | c63a4923e7 | 4 years ago |
bobloy | b141accbd9 | 4 years ago |
bobloy | 44035b78f7 | 4 years ago |
bobloy | 815cfcb031 | 4 years ago |
bobloy | 8e0105355c | 4 years ago |
bobloy | ffbed8cb9a | 4 years ago |
bobloy | 5752ba6056 | 4 years ago |
bobloy | 479b23f0f3 | 4 years ago |
bobloy | 54be5addb5 | 4 years ago |
bobloy | 9440f34669 | 4 years ago |
bobloy | 7c95bd4c0f | 4 years ago |
bobloy | 3fceea634b | 4 years ago |
bobloy | b210f4a9ff | 4 years ago |
bobloy | da754e3cb2 | 4 years ago |
BogdanWDK | 960b66a5b8 | 4 years ago |
bobloy | 20d8acc800 | 4 years ago |
bobloy | 266b0a485d | 4 years ago |
bobloy | d0445f41c7 | 4 years ago |
bobloy | 7f8d0f13f7 | 4 years ago |
bobloy | c529d792e6 | 4 years ago |
bobloy | 62a70c52c6 | 4 years ago |
bobloy | 19104241d7 | 4 years ago |
bobloy | 8a4893c5f5 | 4 years ago |
bobloy | 9ca5d37f7e | 4 years ago |
bobloy | f3965b73d8 | 4 years ago |
bobloy | e27cfba763 | 4 years ago |
bobloy | 211df56e1b | 4 years ago |
bobloy | 443c84ccab | 4 years ago |
bobloy | c7d320ccaa | 4 years ago |
bobloy | 2ab87866dd | 4 years ago |
bobloy | a691b2b85a | 4 years ago |
bobloy | 8c0a1db06f | 4 years ago |
bobloy | cd89bd87e9 | 4 years ago |
bobloy | 31c2e77be6 | 4 years ago |
bobloy | 3a6d3df374 | 4 years ago |
bobloy | 94aceb32e8 | 4 years ago |
bobloy | 693964183c | 4 years ago |
bobloy | 8a42b87bd6 | 4 years ago |
bobloy | a36a800b45 | 4 years ago |
bobloy | af41d079d3 | 4 years ago |
bobloy | ad66d171d4 | 4 years ago |
bobloy | b9d8be397c | 4 years ago |
bobloy | 70d50c5e97 | 4 years ago |
bobloy | b27b252e6f | 4 years ago |
bobloy | ab1b069ee9 | 4 years ago |
bobloy | af4cd92488 | 4 years ago |
bobloy | 03f0ef17be | 4 years ago |
bobloy | db1d64ae3e | 4 years ago |
bobloy | 029b6a51b1 | 4 years ago |
bobloy | 61049c2343 | 4 years ago |
bobloy | eb0c79ef1d | 4 years ago |
bobloy | a0c645bd28 | 4 years ago |
bobloy | 762b0fd320 | 4 years ago |
bobloy | 224ff93531 | 4 years ago |
bobloy | f263f97cc2 | 4 years ago |
bobloy | 61d1313411 | 4 years ago |
bobloy | cb0a7f1041 | 4 years ago |
bobloy | 39801aada9 | 4 years ago |
bobloy | eaa3e0a2f7 | 4 years ago |
bobloy | 8ffc8cc707 | 4 years ago |
bobloy | 84ed2728e7 | 4 years ago |
bobloy | 596865e49d | 4 years ago |
bobloy | 5940ab1af9 | 4 years ago |
bobloy | 9f17bca226 | 4 years ago |
bobloy | bdcb74587e | 4 years ago |
bobloy | 8531ff5f91 | 4 years ago |
bobloy | bed6cf8bb7 | 4 years ago |
bobloy | d13331d52d | 4 years ago |
bobloy | 9ea57ee99a | 4 years ago |
bobloy | 608f425965 | 4 years ago |
bobloy | b04e82fa1d | 4 years ago |
bobloy | f05a8bf4f6 | 4 years ago |
bobloy | bf81d7c157 | 4 years ago |
bobloy | e0042780a1 | 4 years ago |
bobloy | 98ae481d14 | 4 years ago |
bobloy | 29aa493033 | 4 years ago |
bobloy | 7109471c35 | 4 years ago |
bobloy | a2eaf55515 | 4 years ago |
bobloy | 06af229a62 | 4 years ago |
bobloy | 8a3f45bdc1 | 4 years ago |
bobloy | ec5d713fa0 | 4 years ago |
bobloy | 1723dc381d | 4 years ago |
bobloy | c428fa3131 | 4 years ago |
bobloy | f69e8fdb1a | 4 years ago |
bobloy | 28bf2a73e1 | 4 years ago |
bobloy | a046102549 | 4 years ago |
bobloy | fe1f11b2eb | 4 years ago |
bobloy | 7e1a6e108e | 4 years ago |
bobloy | 339492d6d9 | 4 years ago |
bobloy | 7a9fb922bd | 4 years ago |
bobloy | 18e5cc12ff | 4 years ago |
bobloy | 8ecdf45fa7 | 4 years ago |
bobloy | 88ef475339 | 4 years ago |
bobloy | 55656ea672 | 4 years ago |
jack1142 | eddac5b8b2 | 4 years ago |
bobloy | 58054c7a92 | 4 years ago |
bobloy | 2e000b1190 | 4 years ago |
bobloy | 360f294ca0 | 4 years ago |
bobloy | 260a3bc62d | 4 years ago |
bobloy | 7092bd590b | 4 years ago |
bobloy | 67a02f971e | 4 years ago |
bobloy | cb6693f382 | 4 years ago |
bobloy | f9388454a5 | 4 years ago |
bobloy | a4f8fed4e5 | 4 years ago |
bobloy | 92caf16fe9 | 4 years ago |
bobloy | ebac7b249d | 4 years ago |
bobloy | 6af1d06b2c | 4 years ago |
bobloy | b8aceb003e | 4 years ago |
bobloy | 2e65c137f3 | 4 years ago |
bobloy | d377461602 | 4 years ago |
bobloy | 5eb31a277d | 4 years ago |
bobloy | 9f6a05ae88 | 4 years ago |
bobloy | d619c9a502 | 4 years ago |
bobloy | 7c43d6c8ac | 4 years ago |
bobloy | 0ec877d5f9 | 4 years ago |
bobloy | e13518dc42 | 4 years ago |
bobloy | cc95290290 | 4 years ago |
bobloy | 12d0b2944e | 4 years ago |
bobloy | 3d64bcf768 | 4 years ago |
bobloy | 4f494d115d | 4 years ago |
bobloy | 607b7b6718 | 4 years ago |
bobloy | e1d314cc83 | 4 years ago |
bobloy | f24183d4f2 | 4 years ago |
bobloy | c34929a93e | 4 years ago |
bobloy | 19fcf7bc06 | 4 years ago |
bobloy | 68690473c0 | 4 years ago |
bobloy | 16af7c06c8 | 4 years ago |
bobloy | cb2a608dfb | 4 years ago |
bobloy | 2fa558b306 | 4 years ago |
bobloy | 037e091207 | 4 years ago |
bobloy | c747369667 | 4 years ago |
bobloy | 88a2049a53 | 4 years ago |
bobloy | 71aa4c3048 | 4 years ago |
bobloy | 2c38e05ed0 | 4 years ago |
bobloy | 53eda2d9a8 | 4 years ago |
bobloy | 636b3ee975 | 4 years ago |
bobloy | e602b5c868 | 4 years ago |
bobloy | c6a9116a92 | 4 years ago |
bobloy | 1a5aaff268 | 4 years ago |
bobloy | ea0cb8c51b | 4 years ago |
bobloy | 4ca97437db | 4 years ago |
bobloy | 6f414be6ab | 4 years ago |
bobloy | c2c6d61a35 | 4 years ago |
bobloy | 834a0f462d | 4 years ago |
bobloy | 43e2a46c55 | 4 years ago |
bobloy | 2ea7819b8b | 4 years ago |
bobloy | a0042da170 | 4 years ago |
bobloy | d8ec75701d | 4 years ago |
bobloy | 5693d690e1 | 4 years ago |
bobloy | 79cc34d331 | 4 years ago |
bobloy | ec6fa7331b | 4 years ago |
bobloy | 5af7adfe57 | 4 years ago |
bobloy | 59c22dcc3a | 4 years ago |
bobloy | 9ddb1696ac | 4 years ago |
bobloy | c62cde4159 | 4 years ago |
bobloy | 317a9cb3a9 | 4 years ago |
bobloy | 67e7c3bae8 | 4 years ago |
bobloy | dcf1c85ebc | 4 years ago |
bobloy | ba03fb5127 | 4 years ago |
bobloy | 4a9f0b9e74 | 4 years ago |
bobloy | b7ad892b7f | 4 years ago |
bobloy | 438a1be410 | 4 years ago |
bobloy | 22ed50dd98 | 4 years ago |
bobloy | 340f1dbff4 | 4 years ago |
bobloy | f8fcb4c736 | 4 years ago |
bobloy | c953bad13e | 4 years ago |
bobloy | 29f44e887f | 4 years ago |
bobloy | d43a1ec80c | 4 years ago |
bobloy | 151eca1c76 | 4 years ago |
bobloy | 3d4a6578fd | 4 years ago |
bobloy | 6a76d43c3d | 4 years ago |
bobloy | 7507141ec4 | 4 years ago |
bobloy | 62b6209ae9 | 4 years ago |
bobloy | e570058014 | 5 years ago |
bobloy | 5c0b0b6bff | 5 years ago |
bobloy | 58ad3fc978 | 5 years ago |
bobloy | 79b755556e | 5 years ago |
bobloy | 53649137a2 | 5 years ago |
bobloy | c302be7fb5 | 5 years ago |
bobloy | 5c80beea44 | 5 years ago |
bobloy | f9ae2d6f7e | 5 years ago |
bobloy | 4fcc12a2d8 | 5 years ago |
bobloy | f55bc4d583 | 5 years ago |
bobloy | cb03c17459 | 5 years ago |
bobloy | a787835915 | 5 years ago |
bobloy | 361265b983 | 5 years ago |
bobloy | 8b7ebc57c3 | 5 years ago |
bobloy | e5947953aa | 5 years ago |
bobloy | c02244ceb5 | 5 years ago |
bobloy | e8eb3f76e4 | 5 years ago |
zephyrkul | 29bb51c6cd | 5 years ago |
bobloy | a98eb75c0f | 5 years ago |
bobloy | 6e9d31df03 | 5 years ago |
bobloy | f944879896 | 5 years ago |
bobloy | 11a5a7505b | 5 years ago |
bobloy | b9dbad8bd1 | 5 years ago |
bobloy | 942b49e6fc | 5 years ago |
bobloy | 5dcbf562d3 | 5 years ago |
bobloy | 4f6232fb7d | 5 years ago |
bobloy | 50e0eacf24 | 5 years ago |
bobloy | ae915b4fff | 5 years ago |
bobloy | 7e1f59462c | 5 years ago |
bobloy | c7ea0e4d5e | 5 years ago |
bobloy | 035b395eb5 | 5 years ago |
bobloy | 36dc74cfb1 | 5 years ago |
bobloy | ea71aafb52 | 5 years ago |
bobloy | a08b72c83d | 5 years ago |
bobloy | 810670f5c0 | 5 years ago |
bobloy | 9ef5836fa8 | 5 years ago |
bobloy | 849262969c | 6 years ago |
bobloy | 0d4a4071e2 | 6 years ago |
bobloy | 9a3a62f451 | 6 years ago |
bobloy | 53d817756a | 6 years ago |
bobloy | 81cb93a6aa | 6 years ago |
bobloy | 0ee0199b11 | 6 years ago |
bobloy | 0d59a5220f | 6 years ago |
bobloy | 982010f804 | 6 years ago |
@@ -0,0 +1,26 @@
---
name: Bug report
about: Create an issue to report a bug
title: ''
labels: bug
assignees: bobloy

---

**Describe the bug**
<!--A clear and concise description of what the bug is.-->

**To Reproduce**
<!--Steps to reproduce the behavior:-->
1. Load cog '...'
2. Run command '....'
3. See error

**Expected behavior**
<!--A clear and concise description of what you expected to happen.-->

**Screenshots or Error Messages**
<!--If applicable, add screenshots to help explain your problem.-->

**Additional context**
<!--Add any other context about the problem here.-->
@@ -0,0 +1,14 @@
---
name: Feature request
about: Suggest an idea for this project
title: "[Feature Request]"
labels: enhancement
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]-->

**Describe the solution you'd like**
<!--A clear and concise description of what you want to happen. Include which cog or cogs this would interact with-->
@@ -0,0 +1,26 @@
---
name: New AudioTrivia List
about: Submit a new AudioTrivia list to be added
title: "[AudioTrivia Submission]"
labels: 'cog: audiotrivia'
assignees: bobloy

---

**What is this trivia list?**
<!--What's in the list? What kind of category is it?-->

**Number of Questions**
<!--Rough estimate of the number of questions in this list-->

**Original Content?**
<!--Did you come up with this list yourself, or did you get it from someone else's work?-->
<!--If no, be sure to include the source-->
- [ ] Yes
- [ ] No


**Did I test the list?**
<!--Did you already try out the list and find no bugs?-->
- [ ] Yes
- [ ] No
@@ -0,0 +1,62 @@
'cog: announcedaily':
  - announcedaily/*
'cog: audiotrivia':
  - audiotrivia/*
'cog: ccrole':
  - ccrole/*
'cog: chatter':
  - chatter/*
'cog: conquest':
  - conquest/*
'cog: dad':
  - dad/*
'cog: exclusiverole':
  - exclusiverole/*
'cog: fifo':
  - fifo/*
'cog: firstmessage':
  - firstmessage/*
'cog: flag':
  - flag/*
'cog: forcemention':
  - forcemention/*
'cog: hangman':
  - hangman
'cog: infochannel':
  - infochannel/*
'cog: isitdown':
  - isitdown/*
'cog: launchlib':
  - launchlib/*
'cog: leaver':
  - leaver/*
'cog: lovecalculator':
  - lovecalculator/*
'cog: lseen':
  - lseen/*
'cog: nudity':
  - nudity/*
'cog: planttycoon':
  - planttycoon/*
'cog: qrinvite':
  - qrinvite/*
'cog: reactrestrict':
  - reactrestrict/*
'cog: recyclingplant':
  - recyclingplant/*
'cog: rpsls':
  - rpsls/*
'cog: sayurl':
  - sayurl/*
'cog: scp':
  - scp/*
'cog: stealemoji':
  - stealemoji/*
'cog: timerole':
  - timerole/*
'cog: tts':
  - tts/*
'cog: unicode':
  - unicode/*
'cog: werewolf':
  - werewolf/*
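The labeler config above pairs each `cog: ...` label with the path glob(s) that should trigger it. A minimal sanity-check sketch, assuming PyYAML is installed and the file lives at `.github/labeler.yml` (the path is an assumption, not stated in the diff):

```python
# Illustrative check only (not part of the repo): load labeler.yml and confirm
# every label maps to at least one path glob.
import yaml

with open(".github/labeler.yml") as f:
    labels = yaml.safe_load(f)

for label, globs in labels.items():
    assert isinstance(globs, list) and globs, f"{label} has no path globs"
    print(label, "->", ", ".join(globs))
```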
@@ -0,0 +1,20 @@
# GitHub Action that uses Black to reformat the Python code in an incoming pull request.
# If all Python code in the pull request is compliant with Black then this Action does nothing.
# Otherwise, Black is run and its changes are committed back to the incoming pull request.
# https://github.com/cclauss/autoblack

name: black
on: [pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install Black
        run: pip install --upgrade --no-cache-dir black
      - name: Run black --check .
        run: black --check --diff -l 99 .
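To reproduce the job's check step locally before pushing, a small sketch, assuming Black is installed; the subprocess wrapper is illustrative and not part of the workflow:

```python
# Runs the same Black invocation the workflow uses and reports the result.
import subprocess

result = subprocess.run(
    ["black", "--check", "--diff", "-l", "99", "."],
    capture_output=True,
    text=True,
)
print(result.stdout)
print("clean" if result.returncode == 0 else "needs reformatting")
```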
@@ -0,0 +1,19 @@
# This workflow will triage pull requests and apply a label based on the
# paths that are modified in the pull request.
#
# To use this workflow, you will need to set up a .github/labeler.yml
# file with configuration. For more information, see:
# https://github.com/actions/labeler

name: Labeler
on: [pull_request_target]

jobs:
  label:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/labeler@2.2.0
      with:
        repo-token: "${{ secrets.GITHUB_TOKEN }}"
@@ -1,4 +1,5 @@
 AUTHOR: Plab
+AUDIO: "[Audio] Identify this Anime!"
 https://www.youtube.com/watch?v=2uq34TeWEdQ:
 - 'Hagane no Renkinjutsushi (2009)'
 - '(2009) الخيميائي المعدني الكامل'
@@ -1,4 +1,5 @@
 AUTHOR: Lazar
+AUDIO: "[Audio] Identify this NHL Team by their goal horn"
 https://youtu.be/6OejNXrGkK0:
 - Anaheim Ducks
 - Anaheim
@@ -1,13 +1,14 @@
 AUTHOR: Plab
-https://www.youtube.com/watch?v=--bWm9hhoZo:
+NEEDS: New links for all songs.
+https://www.youtube.com/watch?v=f9O2Rjn1azc:
 - Transistor
-https://www.youtube.com/watch?v=-4nCbgayZNE:
+https://www.youtube.com/watch?v=PgUhYFkVdSY:
 - Dark Cloud 2
 - Dark Cloud II
-https://www.youtube.com/watch?v=-64NlME4lJU:
+https://www.youtube.com/watch?v=1T1RZttyMwU:
 - Mega Man 7
 - Mega Man VII
-https://www.youtube.com/watch?v=-AesqnudNuw:
+https://www.youtube.com/watch?v=AdDbbzuq1vY:
 - Mega Man 9
 - Mega Man IX
 https://www.youtube.com/watch?v=-BmGDtP2t7M:
@@ -1,12 +0,0 @@
git+git://github.com/gunthercox/chatterbot-corpus@master#egg=chatterbot_corpus
mathparse>=0.1,<0.2
nltk>=3.2,<4.0
pint>=0.8.1
python-dateutil>=2.8,<2.9
pyyaml>=5.3,<5.4
sqlalchemy>=1.3,<1.4
pytz
spacy>=2.3,<2.4
https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.3.1/en_core_web_sm-2.3.1.tar.gz#egg=en_core_web_sm
https://github.com/explosion/spacy-models/releases/download/en_core_web_md-2.3.1/en_core_web_md-2.3.1.tar.gz#egg=en_core_web_md
# https://github.com/explosion/spacy-models/releases/download/en_core_web_lg-2.3.1/en_core_web_lg-2.3.1.tar.gz#egg=en_core_web_lg
@@ -0,0 +1,71 @@
from chatterbot.storage import StorageAdapter, SQLStorageAdapter


class MyDumbSQLStorageAdapter(SQLStorageAdapter):
    def __init__(self, **kwargs):
        super(SQLStorageAdapter, self).__init__(**kwargs)

        from sqlalchemy import create_engine, inspect
        from sqlalchemy.orm import sessionmaker

        self.database_uri = kwargs.get("database_uri", False)

        # None results in a sqlite in-memory database as the default
        if self.database_uri is None:
            self.database_uri = "sqlite://"

        # Create a file database if the database is not a connection string
        if not self.database_uri:
            self.database_uri = "sqlite:///db.sqlite3"

        self.engine = create_engine(self.database_uri, connect_args={"check_same_thread": False})

        if self.database_uri.startswith("sqlite://"):
            from sqlalchemy.engine import Engine
            from sqlalchemy import event

            @event.listens_for(Engine, "connect")
            def set_sqlite_pragma(dbapi_connection, connection_record):
                dbapi_connection.execute("PRAGMA journal_mode=WAL")
                dbapi_connection.execute("PRAGMA synchronous=NORMAL")

        if not inspect(self.engine).has_table("Statement"):
            self.create_database()

        self.Session = sessionmaker(bind=self.engine, expire_on_commit=True)


class AsyncSQLStorageAdapter(SQLStorageAdapter):
    def __init__(self, **kwargs):
        super(SQLStorageAdapter, self).__init__(**kwargs)

        self.database_uri = kwargs.get("database_uri", False)

        # None results in a sqlite in-memory database as the default
        if self.database_uri is None:
            self.database_uri = "sqlite://"

        # Create a file database if the database is not a connection string
        if not self.database_uri:
            self.database_uri = "sqlite:///db.sqlite3"

    async def initialize(self):
        # from sqlalchemy import create_engine
        from aiomysql.sa import create_engine
        from sqlalchemy.orm import sessionmaker

        self.engine = await create_engine(self.database_uri, convert_unicode=True)

        if self.database_uri.startswith("sqlite://"):
            from sqlalchemy.engine import Engine
            from sqlalchemy import event

            @event.listens_for(Engine, "connect")
            def set_sqlite_pragma(dbapi_connection, connection_record):
                dbapi_connection.execute("PRAGMA journal_mode=WAL")
                dbapi_connection.execute("PRAGMA synchronous=NORMAL")

        if not self.engine.dialect.has_table(self.engine, "Statement"):
            self.create_database()

        self.Session = sessionmaker(bind=self.engine, expire_on_commit=True)
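The adapter above picks its database from `database_uri` and, for SQLite, registers a connect hook that enables WAL journaling. A hedged usage sketch (the dotted adapter path and the database filename are assumptions, not taken from the diff):

```python
# Illustrative wiring of the adapter into a chatterbot ChatBot instance.
from chatterbot import ChatBot

chatbot = ChatBot(
    "FoxBot",
    # Dotted import path to MyDumbSQLStorageAdapter above; adjust to the real module location.
    storage_adapter="storage_adapters.MyDumbSQLStorageAdapter",
    database_uri="sqlite:///chatter.sqlite3",  # file-backed SQLite; WAL pragmas applied on connect
)
```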
@@ -0,0 +1,351 @@
import asyncio
import csv
import html
import logging
import os
import pathlib
import time
from functools import partial

from chatterbot import utils
from chatterbot.conversation import Statement
from chatterbot.tagging import PosLemmaTagger
from chatterbot.trainers import Trainer
from redbot.core.bot import Red
from dateutil import parser as date_parser
from redbot.core.utils import AsyncIter

log = logging.getLogger("red.fox_v3.chatter.trainers")


class KaggleTrainer(Trainer):
    def __init__(self, chatbot, datapath: pathlib.Path, **kwargs):
        super().__init__(chatbot, **kwargs)

        self.data_directory = datapath / kwargs.get("downloadpath", "kaggle_download")

        self.kaggle_dataset = kwargs.get(
            "kaggle_dataset",
            "Cornell-University/movie-dialog-corpus",
        )

        # Create the data directory if it does not already exist
        if not os.path.exists(self.data_directory):
            os.makedirs(self.data_directory)

    def is_downloaded(self, file_path):
        """
        Check if the data file is already downloaded.
        """
        if os.path.exists(file_path):
            self.chatbot.logger.info("File is already downloaded")
            return True

        return False

    async def download(self, dataset):
        import kaggle  # This triggers the API token check

        future = await asyncio.get_event_loop().run_in_executor(
            None,
            partial(
                kaggle.api.dataset_download_files,
                dataset=dataset,
                path=self.data_directory,
                quiet=False,
                unzip=True,
            ),
        )

    def train(self, *args, **kwargs):
        log.error("See asynctrain instead")

    def asynctrain(self, *args, **kwargs):
        raise self.TrainerInitializationException()


class SouthParkTrainer(KaggleTrainer):
    def __init__(self, chatbot, datapath: pathlib.Path, **kwargs):
        super().__init__(
            chatbot,
            datapath,
            downloadpath="ubuntu_data_v2",
            kaggle_dataset="tovarischsukhov/southparklines",
            **kwargs,
        )


class MovieTrainer(KaggleTrainer):
    def __init__(self, chatbot, datapath: pathlib.Path, **kwargs):
        super().__init__(
            chatbot,
            datapath,
            downloadpath="kaggle_movies",
            kaggle_dataset="Cornell-University/movie-dialog-corpus",
            **kwargs,
        )

    async def run_movie_training(self):
        dialogue_file = "movie_lines.tsv"
        conversation_file = "movie_conversations.tsv"
        log.info(f"Beginning dialogue training on {dialogue_file}")
        start_time = time.time()

        tagger = PosLemmaTagger(language=self.chatbot.storage.tagger.language)

        # [lineID, characterID, movieID, character name, text of utterance]
        # File parsing from https://www.kaggle.com/mushaya/conversation-chatbot

        with open(self.data_directory / conversation_file, "r", encoding="utf-8-sig") as conv_tsv:
            conv_lines = conv_tsv.readlines()
        with open(self.data_directory / dialogue_file, "r", encoding="utf-8-sig") as lines_tsv:
            dialog_lines = lines_tsv.readlines()

        # trans_dict = str.maketrans({"<u>": "__", "</u>": "__", '""': '"'})

        lines_dict = {}
        for line in dialog_lines:
            _line = line[:-1].strip('"').split("\t")
            if len(_line) >= 5:  # Only good lines
                lines_dict[_line[0]] = (
                    html.unescape(("".join(_line[4:])).strip())
                    .replace("<u>", "__")
                    .replace("</u>", "__")
                    .replace('""', '"')
                )
            else:
                log.debug(f"Bad line {_line}")

        # collecting line ids for each conversation
        conv = []
        for line in conv_lines[:-1]:
            _line = line[:-1].split("\t")[-1][1:-1].replace("'", "").replace(" ", ",")
            conv.append(_line.split(","))

        # conversations = csv.reader(conv_tsv, delimiter="\t")
        #
        # reader = csv.reader(lines_tsv, delimiter="\t")
        #
        #
        #
        # lines_dict = {}
        # for row in reader:
        #     try:
        #         lines_dict[row[0].strip('"')] = row[4]
        #     except:
        #         log.exception(f"Bad line: {row}")
        #         pass
        #     else:
        #         # log.info(f"Good line: {row}")
        #         pass
        #
        # # lines_dict = {row[0].strip('"'): row[4] for row in reader_list}

        statements_from_file = []
        save_every = 300
        count = 0

        # [characterID of first, characterID of second, movieID, list of utterances]
        async for lines in AsyncIter(conv):
            previous_statement_text = None
            previous_statement_search_text = ""

            for line in lines:
                text = lines_dict[line]
                statement = Statement(
                    text=text,
                    in_response_to=previous_statement_text,
                    conversation="training",
                )

                for preprocessor in self.chatbot.preprocessors:
                    statement = preprocessor(statement)

                statement.search_text = tagger.get_text_index_string(statement.text)
                statement.search_in_response_to = previous_statement_search_text

                previous_statement_text = statement.text
                previous_statement_search_text = statement.search_text

                statements_from_file.append(statement)

            count += 1
            if count >= save_every:
                if statements_from_file:
                    self.chatbot.storage.create_many(statements_from_file)
                    statements_from_file = []
                count = 0

        if statements_from_file:
            self.chatbot.storage.create_many(statements_from_file)

        log.info(f"Training took {time.time() - start_time} seconds.")

    async def asynctrain(self, *args, **kwargs):
        extracted_lines = self.data_directory / "movie_lines.tsv"
        extracted_lines: pathlib.Path

        # Download and extract the Ubuntu dialog corpus if needed
        if not extracted_lines.exists():
            await self.download(self.kaggle_dataset)
        else:
            log.info("Movie dialog already downloaded")
        if not extracted_lines.exists():
            raise FileNotFoundError(f"{extracted_lines}")

        await self.run_movie_training()

        return True

        # train_dialogue = kwargs.get("train_dialogue", True)
        # train_196_dialogue = kwargs.get("train_196", False)
        # train_301_dialogue = kwargs.get("train_301", False)
        #
        # if train_dialogue:
        #     await self.run_dialogue_training(extracted_dir, "dialogueText.csv")
        #
        # if train_196_dialogue:
        #     await self.run_dialogue_training(extracted_dir, "dialogueText_196.csv")
        #
        # if train_301_dialogue:
        #     await self.run_dialogue_training(extracted_dir, "dialogueText_301.csv")


class UbuntuCorpusTrainer2(KaggleTrainer):
    def __init__(self, chatbot, datapath: pathlib.Path, **kwargs):
        super().__init__(
            chatbot,
            datapath,
            downloadpath="kaggle_ubuntu",
            kaggle_dataset="rtatman/ubuntu-dialogue-corpus",
            **kwargs,
        )

    async def asynctrain(self, *args, **kwargs):
        extracted_dir = self.data_directory / "Ubuntu-dialogue-corpus"

        # Download and extract the Ubuntu dialog corpus if needed
        if not extracted_dir.exists():
            await self.download(self.kaggle_dataset)
        else:
            log.info("Ubuntu dialogue already downloaded")
        if not extracted_dir.exists():
            raise FileNotFoundError("Did not extract in the expected way")

        train_dialogue = kwargs.get("train_dialogue", True)
        train_196_dialogue = kwargs.get("train_196", False)
        train_301_dialogue = kwargs.get("train_301", False)

        if train_dialogue:
            await self.run_dialogue_training(extracted_dir, "dialogueText.csv")

        if train_196_dialogue:
            await self.run_dialogue_training(extracted_dir, "dialogueText_196.csv")

        if train_301_dialogue:
            await self.run_dialogue_training(extracted_dir, "dialogueText_301.csv")

        return True

    async def run_dialogue_training(self, extracted_dir, dialogue_file):
        log.info(f"Beginning dialogue training on {dialogue_file}")
        start_time = time.time()

        tagger = PosLemmaTagger(language=self.chatbot.storage.tagger.language)

        with open(extracted_dir / dialogue_file, "r", encoding="utf-8") as dg:
            reader = csv.DictReader(dg)

            next(reader)  # Skip the header

            last_dialogue_id = None
            previous_statement_text = None
            previous_statement_search_text = ""
            statements_from_file = []

            save_every = 50
            count = 0

            async for row in AsyncIter(reader):
                dialogue_id = row["dialogueID"]
                if dialogue_id != last_dialogue_id:
                    previous_statement_text = None
                    previous_statement_search_text = ""
                    last_dialogue_id = dialogue_id
                    count += 1
                    if count >= save_every:
                        if statements_from_file:
                            self.chatbot.storage.create_many(statements_from_file)
                            statements_from_file = []
                        count = 0

                if len(row) > 0:
                    statement = Statement(
                        text=row["text"],
                        in_response_to=previous_statement_text,
                        conversation="training",
                        # created_at=date_parser.parse(row["date"]),
                        persona=row["from"],
                    )

                    for preprocessor in self.chatbot.preprocessors:
                        statement = preprocessor(statement)

                    statement.search_text = tagger.get_text_index_string(statement.text)
                    statement.search_in_response_to = previous_statement_search_text

                    previous_statement_text = statement.text
                    previous_statement_search_text = statement.search_text

                    statements_from_file.append(statement)

        if statements_from_file:
            self.chatbot.storage.create_many(statements_from_file)

        log.info(f"Training took {time.time() - start_time} seconds.")


class TwitterCorpusTrainer(Trainer):
    pass
    # def train(self, *args, **kwargs):
    #     """
    #     Train the chat bot based on the provided list of
    #     statements that represents a single conversation.
    #     """
    #     import twint
    #
    #     c = twint.Config()
    #     c.__dict__.update(kwargs)
    #     twint.run.Search(c)
    #
    #
    #     previous_statement_text = None
    #     previous_statement_search_text = ''
    #
    #     statements_to_create = []
    #
    #     for conversation_count, text in enumerate(conversation):
    #         if self.show_training_progress:
    #             utils.print_progress_bar(
    #                 'List Trainer',
    #                 conversation_count + 1, len(conversation)
    #             )
    #
    #         statement_search_text = self.chatbot.storage.tagger.get_text_index_string(text)
    #
    #         statement = self.get_preprocessed_statement(
    #             Statement(
    #                 text=text,
    #                 search_text=statement_search_text,
    #                 in_response_to=previous_statement_text,
    #                 search_in_response_to=previous_statement_search_text,
    #                 conversation='training'
    #             )
    #         )
    #
    #         previous_statement_text = statement.text
    #         previous_statement_search_text = statement_search_text
    #
    #         statements_to_create.append(statement)
    #
    #         self.chatbot.storage.create_many(statements_to_create)
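`MovieTrainer.asynctrain` downloads the Kaggle movie-dialog corpus when `movie_lines.tsv` is missing, then streams `Statement` batches into the storage backend via `create_many`. A hypothetical driver coroutine (the import path and chatbot setup are assumptions, not shown in the diff):

```python
# Illustrative driver for the MovieTrainer defined above.
import pathlib

from chatterbot import ChatBot
from chatter.trainers import MovieTrainer  # assumed module path


async def train_movie_dialog(chatbot: ChatBot, data_path: pathlib.Path) -> bool:
    trainer = MovieTrainer(chatbot, data_path)
    # First run fetches and unzips the Kaggle dataset, then batches of
    # Statement objects are written to chatbot.storage.
    return await trainer.asynctrain()
```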
[Binary images added: 4.6 MiB and 144 KiB]
@@ -0,0 +1,15 @@
from redbot.core import data_manager

from .conquest import Conquest
from .mapmaker import MapMaker


async def setup(bot):
    cog = Conquest(bot)
    data_manager.bundled_data_path(cog)
    await cog.load_data()

    bot.add_cog(cog)

    cog2 = MapMaker(bot)
    bot.add_cog(cog2)
@@ -0,0 +1,422 @@
import asyncio
import json
import logging
import os
import pathlib
from abc import ABC
from shutil import copyfile
from typing import Optional

import discord
from PIL import Image, ImageChops, ImageColor, ImageOps
from discord.ext.commands import Greedy
from redbot.core import Config, commands
from redbot.core.bot import Red
from redbot.core.data_manager import bundled_data_path, cog_data_path

log = logging.getLogger("red.fox_v3.conquest")


class Conquest(commands.Cog):
    """
    Cog for
    """

    default_zoom_json = {"enabled": False, "x": -1, "y": -1, "zoom": 1.0}

    def __init__(self, bot: Red):
        super().__init__()
        self.bot = bot
        self.config = Config.get_conf(
            self, identifier=67111110113117101115116, force_registration=True
        )

        default_guild = {}
        default_global = {"current_map": None}
        self.config.register_guild(**default_guild)
        self.config.register_global(**default_global)

        self.data_path: pathlib.Path = cog_data_path(self)
        self.asset_path: Optional[pathlib.Path] = None

        self.current_map = None
        self.map_data = None
        self.ext = None
        self.ext_format = None

    async def red_delete_data_for_user(self, **kwargs):
        """Nothing to delete"""
        return

    async def load_data(self):
        """
        Initial loading of data from bundled_data_path and config
        """
        self.asset_path = bundled_data_path(self) / "assets"
        self.current_map = await self.config.current_map()

        if self.current_map:
            if not await self.current_map_load():
                await self.config.current_map.clear()

    async def current_map_load(self):
        map_data_path = self.asset_path / self.current_map / "data.json"
        if not map_data_path.exists():
            log.warning(f"{map_data_path} does not exist. Clearing current map")
            return False

        with map_data_path.open() as mapdata:
            self.map_data: dict = json.load(mapdata)
        self.ext = self.map_data["extension"]
        self.ext_format = "JPEG" if self.ext.upper() == "JPG" else self.ext.upper()
        return True

    @commands.group()
    async def conquest(self, ctx: commands.Context):
        """
        Base command for conquest cog. Start with `[p]conquest set map` to select a map.
        """
        if ctx.invoked_subcommand is None and self.current_map is not None:
            await self._conquest_current(ctx)

    @conquest.command(name="list")
    async def _conquest_list(self, ctx: commands.Context):
        """
        List currently available maps
        """
        maps_json = self.asset_path / "maps.json"

        with maps_json.open() as maps:
            maps_json = json.load(maps)
            map_list = "\n".join(maps_json["maps"])
            await ctx.maybe_send_embed(f"Current maps:\n{map_list}")

    @conquest.group(name="set")
    async def conquest_set(self, ctx: commands.Context):
        """Base command for admin actions like selecting a map"""
        pass

    @conquest_set.command(name="resetzoom")
    async def _conquest_set_resetzoom(self, ctx: commands.Context):
        """Resets the zoom level of the current map"""
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        zoom_json_path = self.data_path / self.current_map / "settings.json"
        if not zoom_json_path.exists():
            await ctx.maybe_send_embed(
                f"No zoom data found for {self.current_map}, reset not needed"
            )
            return

        with zoom_json_path.open("w+") as zoom_json:
            json.dump({"enabled": False}, zoom_json)

        await ctx.tick()

    @conquest_set.command(name="zoom")
    async def _conquest_set_zoom(self, ctx: commands.Context, x: int, y: int, zoom: float):
        """
        Set the zoom level and position of the current map

        x: positive integer
        y: positive integer
        zoom: float greater than or equal to 1
        """
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        if x < 0 or y < 0 or zoom < 1:
            await ctx.send_help()
            return

        zoom_json_path = self.data_path / self.current_map / "settings.json"

        zoom_data = self.default_zoom_json.copy()
        zoom_data["enabled"] = True
        zoom_data["x"] = x
        zoom_data["y"] = y
        zoom_data["zoom"] = zoom

        with zoom_json_path.open("w+") as zoom_json:
            json.dump(zoom_data, zoom_json)

        await ctx.tick()

    @conquest_set.command(name="zoomtest")
    async def _conquest_set_zoomtest(self, ctx: commands.Context, x: int, y: int, zoom: float):
        """
        Test the zoom level and position of the current map

        x: positive integer
        y: positive integer
        zoom: float greater than or equal to 1
        """
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        if x < 0 or y < 0 or zoom < 1:
            await ctx.send_help()
            return

        zoomed_path = await self._create_zoomed_map(
            self.data_path / self.current_map / f"current.{self.ext}", x, y, zoom
        )

        await ctx.send(
            file=discord.File(
                fp=zoomed_path,
                filename=f"current_zoomed.{self.ext}",
            )
        )

    async def _create_zoomed_map(self, map_path, x, y, zoom, **kwargs):
        current_map = Image.open(map_path)

        w, h = current_map.size
        zoom2 = zoom * 2
        zoomed_map = current_map.crop((x - w / zoom2, y - h / zoom2, x + w / zoom2, y + h / zoom2))
        # zoomed_map = zoomed_map.resize((w, h), Image.LANCZOS)
        zoomed_map.save(self.data_path / self.current_map / f"zoomed.{self.ext}", self.ext_format)
        return self.data_path / self.current_map / f"zoomed.{self.ext}"

    @conquest_set.command(name="save")
    async def _conquest_set_save(self, ctx: commands.Context, *, save_name):
        """Save the current map to be loaded later"""
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        current_map_folder = self.data_path / self.current_map
        current_map = current_map_folder / f"current.{self.ext}"

        if not current_map_folder.exists() or not current_map.exists():
            await ctx.maybe_send_embed("Current map doesn't exist! Try setting a new one")
            return

        copyfile(current_map, current_map_folder / f"{save_name}.{self.ext}")
        await ctx.tick()

    @conquest_set.command(name="load")
    async def _conquest_set_load(self, ctx: commands.Context, *, save_name):
        """Load a saved map to be the current map"""
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        current_map_folder = self.data_path / self.current_map
        current_map = current_map_folder / f"current.{self.ext}"
        saved_map = current_map_folder / f"{save_name}.{self.ext}"

        if not current_map_folder.exists() or not saved_map.exists():
            await ctx.maybe_send_embed(f"Saved map not found in the {self.current_map} folder")
            return

        copyfile(saved_map, current_map)
        await ctx.tick()

    @conquest_set.command(name="map")
    async def _conquest_set_map(self, ctx: commands.Context, mapname: str, reset: bool = False):
        """
        Select a map from current available maps

        To add more maps, see the guide (WIP)
        """
        map_dir = self.asset_path / mapname
        if not map_dir.exists() or not map_dir.is_dir():
            await ctx.maybe_send_embed(
                f"Map `{mapname}` was not found in the {self.asset_path} directory"
            )
            return

        self.current_map = mapname
        await self.config.current_map.set(self.current_map)  # Save to config too

        await self.current_map_load()

        # map_data_path = self.asset_path / mapname / "data.json"
        # with map_data_path.open() as mapdata:
        #     self.map_data = json.load(mapdata)
        #
        # self.ext = self.map_data["extension"]

        current_map_folder = self.data_path / self.current_map
        current_map = current_map_folder / f"current.{self.ext}"

        if not reset and current_map.exists():
            await ctx.maybe_send_embed(
                "This map is already in progress, resuming from last game\n"
                "Use `[p]conquest set map [mapname] True` to start a new game"
            )
        else:
            if not current_map_folder.exists():
                os.makedirs(current_map_folder)
            copyfile(self.asset_path / mapname / f"blank.{self.ext}", current_map)

        await ctx.tick()

    @conquest.command(name="current")
    async def _conquest_current(self, ctx: commands.Context):
        """
        Send the current map.
        """
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        current_img = self.data_path / self.current_map / f"current.{self.ext}"

        await self._send_maybe_zoomed_map(ctx, current_img, f"current_map.{self.ext}")

    async def _send_maybe_zoomed_map(self, ctx, map_path, filename):
        zoom_data = {"enabled": False}

        zoom_json_path = self.data_path / self.current_map / "settings.json"

        if zoom_json_path.exists():
            with zoom_json_path.open() as zoom_json:
                zoom_data = json.load(zoom_json)

        if zoom_data["enabled"]:
            map_path = await self._create_zoomed_map(map_path, **zoom_data)

        await ctx.send(file=discord.File(fp=map_path, filename=filename))

    @conquest.command("blank")
    async def _conquest_blank(self, ctx: commands.Context):
        """
        Print the blank version of the current map, for reference.
        """
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        current_blank_img = self.asset_path / self.current_map / f"blank.{self.ext}"

        await self._send_maybe_zoomed_map(ctx, current_blank_img, f"blank_map.{self.ext}")

    @conquest.command("numbered")
    async def _conquest_numbered(self, ctx: commands.Context):
        """
        Print the numbered version of the current map, for reference.
        """
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        numbers_path = self.asset_path / self.current_map / f"numbers.{self.ext}"
        if not numbers_path.exists():
            await ctx.send(
                file=discord.File(
                    fp=self.asset_path / self.current_map / f"numbered.{self.ext}",
                    filename=f"numbered.{self.ext}",
                )
            )
            return

        current_map = Image.open(self.data_path / self.current_map / f"current.{self.ext}")
        numbers = Image.open(numbers_path).convert("L")

        inverted_map = ImageOps.invert(current_map)

        loop = asyncio.get_running_loop()
        current_numbered_img = await loop.run_in_executor(
            None, Image.composite, current_map, inverted_map, numbers
        )

        current_numbered_img.save(
            self.data_path / self.current_map / f"current_numbered.{self.ext}", self.ext_format
        )

        await self._send_maybe_zoomed_map(
            ctx,
            self.data_path / self.current_map / f"current_numbered.{self.ext}",
            f"current_numbered.{self.ext}",
        )

    @conquest.command(name="multitake")
    async def _conquest_multitake(
        self, ctx: commands.Context, start_region: int, end_region: int, color: str
    ):
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        try:
            color = ImageColor.getrgb(color)
        except ValueError:
            await ctx.maybe_send_embed(f"Invalid color {color}")
            return

        if end_region > self.map_data["region_max"] or start_region < 1:
            await ctx.maybe_send_embed(
                f"Max region number is {self.map_data['region_max']}, minimum is 1"
            )
            return
        regions = [r for r in range(start_region, end_region + 1)]

        await self._process_take_regions(color, ctx, regions)

    async def _process_take_regions(self, color, ctx, regions):
        current_img_path = self.data_path / self.current_map / f"current.{self.ext}"
        im = Image.open(current_img_path)
        async with ctx.typing():
            out: Image.Image = await self._composite_regions(im, regions, color)
            out.save(current_img_path, self.ext_format)
        await self._send_maybe_zoomed_map(ctx, current_img_path, f"map.{self.ext}")

    @conquest.command(name="take")
    async def _conquest_take(self, ctx: commands.Context, regions: Greedy[int], *, color: str):
        """
        Claim a territory or list of territories for a specified color

        :param regions: List of integer regions
        :param color: Color to claim regions
        """
        if not regions:
            await ctx.send_help()
            return

        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        try:
            color = ImageColor.getrgb(color)
        except ValueError:
            await ctx.maybe_send_embed(f"Invalid color {color}")
            return

        for region in regions:
            if region > self.map_data["region_max"] or region < 1:
                await ctx.maybe_send_embed(
                    f"Max region number is {self.map_data['region_max']}, minimum is 1"
                )
                return

        await self._process_take_regions(color, ctx, regions)

    async def _composite_regions(self, im, regions, color) -> Image.Image:
        im2 = Image.new("RGB", im.size, color)

        loop = asyncio.get_running_loop()

        combined_mask = None
        for region in regions:
            mask = Image.open(
                self.asset_path / self.current_map / "masks" / f"{region}.{self.ext}"
            ).convert("L")
            if combined_mask is None:
                combined_mask = mask
            else:
                # combined_mask = ImageChops.logical_or(combined_mask, mask)
                combined_mask = await loop.run_in_executor(
                    None, ImageChops.multiply, combined_mask, mask
                )

        out = await loop.run_in_executor(None, Image.composite, im, im2, combined_mask)

        return out
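`_create_zoomed_map` crops a window of width w/zoom and height h/zoom centred on (x, y). A worked example of that crop-box arithmetic (the values are illustrative only):

```python
# For a 1000x800 map with x=500, y=400, zoom=2.0, the crop keeps a half-width,
# half-height window centred on (x, y).
w, h = 1000, 800
x, y, zoom = 500, 400, 2.0
zoom2 = zoom * 2
box = (x - w / zoom2, y - h / zoom2, x + w / zoom2, y + h / zoom2)
print(box)  # (250.0, 200.0, 750.0, 600.0) -> a 500x400 region of the original image
```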
[Binary image added: 400 KiB]
@@ -0,0 +1,3 @@
{
  "region_max": 70
}
[Binary images added: 480 KiB and 345 KiB]
@@ -0,0 +1,3 @@
{
  "region_max": 70
}
[Binary image added: 413 KiB]
@@ -0,0 +1,7 @@
{
  "maps": [
    "simple",
    "ck2",
    "HoI"
  ]
}
[Binary image added: 312 KiB]
@@ -0,0 +1,4 @@
{
  "region_max": 70,
  "extension": "jpg"
}
[Binary images added: 59 files, 21-56 KiB each]