Compare commits

4 commits: master ... sherlock_i

| Author | SHA1       | Date        |
| ------ | ---------- | ----------- |
| bobloy | b9bf89b799 | 5 years ago |
| bobloy | 8de1aa2082 | 5 years ago |
| bobloy | 0fef7c899c | 5 years ago |
| bobloy | 52a18a5b52 | 5 years ago |
@@ -1,26 +0,0 @@
---
name: Bug report
about: Create an issue to report a bug
title: ''
labels: bug
assignees: bobloy

---

**Describe the bug**
<!--A clear and concise description of what the bug is.-->

**To Reproduce**
<!--Steps to reproduce the behavior:-->
1. Load cog '...'
2. Run command '....'
3. See error

**Expected behavior**
<!--A clear and concise description of what you expected to happen.-->

**Screenshots or Error Messages**
<!--If applicable, add screenshots to help explain your problem.-->

**Additional context**
<!--Add any other context about the problem here.-->
@@ -1,14 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: "[Feature Request]"
labels: enhancement
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]-->

**Describe the solution you'd like**
<!--A clear and concise description of what you want to happen. Include which cog or cogs this would interact with-->
@@ -1,26 +0,0 @@
---
name: New AudioTrivia List
about: Submit a new AudioTrivia list to be added
title: "[AudioTrivia Submission]"
labels: 'cog: audiotrivia'
assignees: bobloy

---

**What is this trivia list?**
<!--What's in the list? What kind of category is it?-->

**Number of Questions**
<!--Rough estimate of the number of questions in this list-->

**Original Content?**
<!--Did you come up with this list yourself or did you get it from someone else's work?-->
<!--If no, be sure to include the source-->
- [ ] Yes
- [ ] No


**Did I test the list?**
<!--Did you already try out the list and find no bugs?-->
- [ ] Yes
- [ ] No
@@ -1,62 +0,0 @@
'cog: announcedaily':
  - announcedaily/*
'cog: audiotrivia':
  - audiotrivia/*
'cog: ccrole':
  - ccrole/*
'cog: chatter':
  - chatter/*
'cog: conquest':
  - conquest/*
'cog: dad':
  - dad/*
'cog: exclusiverole':
  - exclusiverole/*
'cog: fifo':
  - fifo/*
'cog: firstmessage':
  - firstmessage/*
'cog: flag':
  - flag/*
'cog: forcemention':
  - forcemention/*
'cog: hangman':
  - hangman
'cog: infochannel':
  - infochannel/*
'cog: isitdown':
  - isitdown/*
'cog: launchlib':
  - launchlib/*
'cog: leaver':
  - leaver/*
'cog: lovecalculator':
  - lovecalculator/*
'cog: lseen':
  - lseen/*
'cog: nudity':
  - nudity/*
'cog: planttycoon':
  - planttycoon/*
'cog: qrinvite':
  - qrinvite/*
'cog: reactrestrict':
  - reactrestrict/*
'cog: recyclingplant':
  - recyclingplant/*
'cog: rpsls':
  - rpsls/*
'cog: sayurl':
  - sayurl/*
'cog: scp':
  - scp/*
'cog: stealemoji':
  - stealemoji/*
'cog: timerole':
  - timerole/*
'cog: tts':
  - tts/*
'cog: unicode':
  - unicode/*
'cog: werewolf':
  - werewolf/*
@@ -1,20 +0,0 @@
# GitHub Action that uses Black to reformat the Python code in an incoming pull request.
# If all Python code in the pull request is compliant with Black then this Action does nothing.
# Otherwise, Black is run and its changes are committed back to the incoming pull request.
# https://github.com/cclauss/autoblack

name: black
on: [pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install Black
        run: pip install --upgrade --no-cache-dir black
      - name: Run black --check .
        run: black --check --diff -l 99 .
@@ -1,19 +0,0 @@
# This workflow will triage pull requests and apply a label based on the
# paths that are modified in the pull request.
#
# To use this workflow, you will need to set up a .github/labeler.yml
# file with configuration. For more information, see:
# https://github.com/actions/labeler

name: Labeler
on: [pull_request_target]

jobs:
  label:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/labeler@2.2.0
        with:
          repo-token: "${{ secrets.GITHUB_TOKEN }}"
@@ -1,5 +1,4 @@
 AUTHOR: Plab
-AUDIO: "[Audio] Identify this Anime!"
 https://www.youtube.com/watch?v=2uq34TeWEdQ:
 - 'Hagane no Renkinjutsushi (2009)'
 - '(2009) الخيميائي المعدني الكامل'
@@ -1,14 +1,13 @@
 AUTHOR: Plab
-NEEDS: New links for all songs.
-https://www.youtube.com/watch?v=--bWm9hhoZo:
+https://www.youtube.com/watch?v=f9O2Rjn1azc:
 - Transistor
-https://www.youtube.com/watch?v=PgUhYFkVdSY:
+https://www.youtube.com/watch?v=-4nCbgayZNE:
 - Dark Cloud 2
 - Dark Cloud II
-https://www.youtube.com/watch?v=1T1RZttyMwU:
+https://www.youtube.com/watch?v=-64NlME4lJU:
 - Mega Man 7
 - Mega Man VII
-https://www.youtube.com/watch?v=AdDbbzuq1vY:
+https://www.youtube.com/watch?v=-AesqnudNuw:
 - Mega Man 9
 - Mega Man IX
 https://www.youtube.com/watch?v=-BmGDtP2t7M:
@@ -1,5 +1,4 @@
 AUTHOR: Lazar
-AUDIO: "[Audio] Identify this NHL Team by their goal horn"
 https://youtu.be/6OejNXrGkK0:
 - Anaheim Ducks
 - Anaheim
@@ -0,0 +1,12 @@
git+git://github.com/gunthercox/chatterbot-corpus@master#egg=chatterbot_corpus
mathparse>=0.1,<0.2
nltk>=3.2,<4.0
pint>=0.8.1
python-dateutil>=2.8,<2.9
pyyaml>=5.3,<5.4
sqlalchemy>=1.3,<1.4
pytz
spacy>=2.3,<2.4
https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.3.1/en_core_web_sm-2.3.1.tar.gz#egg=en_core_web_sm
https://github.com/explosion/spacy-models/releases/download/en_core_web_md-2.3.1/en_core_web_md-2.3.1.tar.gz#egg=en_core_web_md
# https://github.com/explosion/spacy-models/releases/download/en_core_web_lg-2.3.1/en_core_web_lg-2.3.1.tar.gz#egg=en_core_web_lg
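Each dependency above is pinned to a compatible version range (e.g. `pyyaml>=5.3,<5.4` accepts any 5.3.x release but not 5.4). As a minimal sketch, a specifier line like these splits into a package name plus a comma-separated constraint list; `parse_requirement` is a hypothetical helper for illustration, not part of the repo:

```python
import re

def parse_requirement(line):
    # Hypothetical helper: split "name>=x,<y" into the package name
    # and its individual version constraints.
    match = re.match(r"^([A-Za-z0-9_.\-]+)\s*(.*)$", line)
    name, spec = match.group(1), match.group(2)
    constraints = [c.strip() for c in spec.split(",") if c.strip()]
    return name, constraints

name, constraints = parse_requirement("pyyaml>=5.3,<5.4")
# name == "pyyaml", constraints == [">=5.3", "<5.4"]
```

Bare names such as `pytz` parse to an empty constraint list, meaning any version satisfies them.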
@@ -1,71 +0,0 @@
from chatterbot.storage import StorageAdapter, SQLStorageAdapter


class MyDumbSQLStorageAdapter(SQLStorageAdapter):
    def __init__(self, **kwargs):
        super(SQLStorageAdapter, self).__init__(**kwargs)

        from sqlalchemy import create_engine, inspect
        from sqlalchemy.orm import sessionmaker

        self.database_uri = kwargs.get("database_uri", False)

        # None results in a sqlite in-memory database as the default
        if self.database_uri is None:
            self.database_uri = "sqlite://"

        # Create a file database if the database is not a connection string
        if not self.database_uri:
            self.database_uri = "sqlite:///db.sqlite3"

        self.engine = create_engine(self.database_uri, connect_args={"check_same_thread": False})

        if self.database_uri.startswith("sqlite://"):
            from sqlalchemy.engine import Engine
            from sqlalchemy import event

            @event.listens_for(Engine, "connect")
            def set_sqlite_pragma(dbapi_connection, connection_record):
                dbapi_connection.execute("PRAGMA journal_mode=WAL")
                dbapi_connection.execute("PRAGMA synchronous=NORMAL")

        if not inspect(self.engine).has_table("Statement"):
            self.create_database()

        self.Session = sessionmaker(bind=self.engine, expire_on_commit=True)


class AsyncSQLStorageAdapter(SQLStorageAdapter):
    def __init__(self, **kwargs):
        super(SQLStorageAdapter, self).__init__(**kwargs)

        self.database_uri = kwargs.get("database_uri", False)

        # None results in a sqlite in-memory database as the default
        if self.database_uri is None:
            self.database_uri = "sqlite://"

        # Create a file database if the database is not a connection string
        if not self.database_uri:
            self.database_uri = "sqlite:///db.sqlite3"

    async def initialize(self):
        # from sqlalchemy import create_engine
        from aiomysql.sa import create_engine
        from sqlalchemy.orm import sessionmaker

        self.engine = await create_engine(self.database_uri, convert_unicode=True)

        if self.database_uri.startswith("sqlite://"):
            from sqlalchemy.engine import Engine
            from sqlalchemy import event

            @event.listens_for(Engine, "connect")
            def set_sqlite_pragma(dbapi_connection, connection_record):
                dbapi_connection.execute("PRAGMA journal_mode=WAL")
                dbapi_connection.execute("PRAGMA synchronous=NORMAL")

        if not inspect(self.engine).has_table("Statement"):
            self.create_database()

        self.Session = sessionmaker(bind=self.engine, expire_on_commit=True)
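Both adapters in the deleted file register a connect-event listener that switches SQLite to write-ahead logging with `synchronous=NORMAL`, trading a little durability for much better concurrent read/write throughput. A minimal standalone sketch of the same pragmas using only the stdlib `sqlite3` module (the temp-file path is illustrative):

```python
import os
import sqlite3
import tempfile

# WAL needs a file-backed database; in-memory databases ignore it.
db_path = os.path.join(tempfile.mkdtemp(), "db.sqlite3")
conn = sqlite3.connect(db_path, check_same_thread=False)

# The same pragmas the adapters apply on every new connection.
journal_mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
conn.execute("PRAGMA synchronous=NORMAL")

print(journal_mode)  # "wal"
conn.close()
```

`PRAGMA journal_mode` returns the mode actually in effect, so checking its result is a cheap way to confirm WAL was accepted.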
@@ -1,351 +0,0 @@
import asyncio
import csv
import html
import logging
import os
import pathlib
import time
from functools import partial

from chatterbot import utils
from chatterbot.conversation import Statement
from chatterbot.tagging import PosLemmaTagger
from chatterbot.trainers import Trainer
from redbot.core.bot import Red
from dateutil import parser as date_parser
from redbot.core.utils import AsyncIter

log = logging.getLogger("red.fox_v3.chatter.trainers")


class KaggleTrainer(Trainer):
    def __init__(self, chatbot, datapath: pathlib.Path, **kwargs):
        super().__init__(chatbot, **kwargs)

        self.data_directory = datapath / kwargs.get("downloadpath", "kaggle_download")

        self.kaggle_dataset = kwargs.get(
            "kaggle_dataset",
            "Cornell-University/movie-dialog-corpus",
        )

        # Create the data directory if it does not already exist
        if not os.path.exists(self.data_directory):
            os.makedirs(self.data_directory)

    def is_downloaded(self, file_path):
        """
        Check if the data file is already downloaded.
        """
        if os.path.exists(file_path):
            self.chatbot.logger.info("File is already downloaded")
            return True

        return False

    async def download(self, dataset):
        import kaggle  # This triggers the API token check

        future = await asyncio.get_event_loop().run_in_executor(
            None,
            partial(
                kaggle.api.dataset_download_files,
                dataset=dataset,
                path=self.data_directory,
                quiet=False,
                unzip=True,
            ),
        )

    def train(self, *args, **kwargs):
        log.error("See asynctrain instead")

    def asynctrain(self, *args, **kwargs):
        raise self.TrainerInitializationException()
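`KaggleTrainer.download` above offloads the blocking Kaggle API call to the default executor via `run_in_executor`, using `functools.partial` to bind keyword arguments, so the bot's event loop stays responsive during a download. A runnable sketch of that pattern with a stand-in for the blocking call (`blocking_download` and its arguments are illustrative, not the cog's API):

```python
import asyncio
from functools import partial

def blocking_download(dataset, path):
    # Stand-in for kaggle.api.dataset_download_files, which blocks.
    return f"downloaded {dataset} to {path}"

async def download(dataset):
    loop = asyncio.get_running_loop()
    # partial() bakes in the keyword argument, since run_in_executor
    # only forwards positional arguments to the callable.
    return await loop.run_in_executor(
        None, partial(blocking_download, dataset, path="./data")
    )

result = asyncio.run(download("Cornell-University/movie-dialog-corpus"))
print(result)  # downloaded Cornell-University/movie-dialog-corpus to ./data
```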


class SouthParkTrainer(KaggleTrainer):
    def __init__(self, chatbot, datapath: pathlib.Path, **kwargs):
        super().__init__(
            chatbot,
            datapath,
            downloadpath="ubuntu_data_v2",
            kaggle_dataset="tovarischsukhov/southparklines",
            **kwargs,
        )


class MovieTrainer(KaggleTrainer):
    def __init__(self, chatbot, datapath: pathlib.Path, **kwargs):
        super().__init__(
            chatbot,
            datapath,
            downloadpath="kaggle_movies",
            kaggle_dataset="Cornell-University/movie-dialog-corpus",
            **kwargs,
        )

    async def run_movie_training(self):
        dialogue_file = "movie_lines.tsv"
        conversation_file = "movie_conversations.tsv"
        log.info(f"Beginning dialogue training on {dialogue_file}")
        start_time = time.time()

        tagger = PosLemmaTagger(language=self.chatbot.storage.tagger.language)

        # [lineID, characterID, movieID, character name, text of utterance]
        # File parsing from https://www.kaggle.com/mushaya/conversation-chatbot

        with open(self.data_directory / conversation_file, "r", encoding="utf-8-sig") as conv_tsv:
            conv_lines = conv_tsv.readlines()
        with open(self.data_directory / dialogue_file, "r", encoding="utf-8-sig") as lines_tsv:
            dialog_lines = lines_tsv.readlines()

        # trans_dict = str.maketrans({"<u>": "__", "</u>": "__", '""': '"'})

        lines_dict = {}
        for line in dialog_lines:
            _line = line[:-1].strip('"').split("\t")
            if len(_line) >= 5:  # Only good lines
                lines_dict[_line[0]] = (
                    html.unescape(("".join(_line[4:])).strip())
                    .replace("<u>", "__")
                    .replace("</u>", "__")
                    .replace('""', '"')
                )
            else:
                log.debug(f"Bad line {_line}")

        # collecting line ids for each conversation
        conv = []
        for line in conv_lines[:-1]:
            _line = line[:-1].split("\t")[-1][1:-1].replace("'", "").replace(" ", ",")
            conv.append(_line.split(","))

        # conversations = csv.reader(conv_tsv, delimiter="\t")
        #
        # reader = csv.reader(lines_tsv, delimiter="\t")
        #
        #
        #
        # lines_dict = {}
        # for row in reader:
        #     try:
        #         lines_dict[row[0].strip('"')] = row[4]
        #     except:
        #         log.exception(f"Bad line: {row}")
        #         pass
        #     else:
        #         # log.info(f"Good line: {row}")
        #         pass
        #
        # # lines_dict = {row[0].strip('"'): row[4] for row in reader_list}

        statements_from_file = []
        save_every = 300
        count = 0

        # [characterID of first, characterID of second, movieID, list of utterances]
        async for lines in AsyncIter(conv):
            previous_statement_text = None
            previous_statement_search_text = ""

            for line in lines:
                text = lines_dict[line]
                statement = Statement(
                    text=text,
                    in_response_to=previous_statement_text,
                    conversation="training",
                )

                for preprocessor in self.chatbot.preprocessors:
                    statement = preprocessor(statement)

                statement.search_text = tagger.get_text_index_string(statement.text)
                statement.search_in_response_to = previous_statement_search_text

                previous_statement_text = statement.text
                previous_statement_search_text = statement.search_text

                statements_from_file.append(statement)

                count += 1
                if count >= save_every:
                    if statements_from_file:
                        self.chatbot.storage.create_many(statements_from_file)
                        statements_from_file = []
                    count = 0

        if statements_from_file:
            self.chatbot.storage.create_many(statements_from_file)

        log.info(f"Training took {time.time() - start_time} seconds.")

    async def asynctrain(self, *args, **kwargs):
        extracted_lines = self.data_directory / "movie_lines.tsv"
        extracted_lines: pathlib.Path

        # Download and extract the Ubuntu dialog corpus if needed
        if not extracted_lines.exists():
            await self.download(self.kaggle_dataset)
        else:
            log.info("Movie dialog already downloaded")
        if not extracted_lines.exists():
            raise FileNotFoundError(f"{extracted_lines}")

        await self.run_movie_training()

        return True

        # train_dialogue = kwargs.get("train_dialogue", True)
        # train_196_dialogue = kwargs.get("train_196", False)
        # train_301_dialogue = kwargs.get("train_301", False)
        #
        # if train_dialogue:
        #     await self.run_dialogue_training(extracted_dir, "dialogueText.csv")
        #
        # if train_196_dialogue:
        #     await self.run_dialogue_training(extracted_dir, "dialogueText_196.csv")
        #
        # if train_301_dialogue:
        #     await self.run_dialogue_training(extracted_dir, "dialogueText_301.csv")


class UbuntuCorpusTrainer2(KaggleTrainer):
    def __init__(self, chatbot, datapath: pathlib.Path, **kwargs):
        super().__init__(
            chatbot,
            datapath,
            downloadpath="kaggle_ubuntu",
            kaggle_dataset="rtatman/ubuntu-dialogue-corpus",
            **kwargs,
        )

    async def asynctrain(self, *args, **kwargs):
        extracted_dir = self.data_directory / "Ubuntu-dialogue-corpus"

        # Download and extract the Ubuntu dialog corpus if needed
        if not extracted_dir.exists():
            await self.download(self.kaggle_dataset)
        else:
            log.info("Ubuntu dialogue already downloaded")
        if not extracted_dir.exists():
            raise FileNotFoundError("Did not extract in the expected way")

        train_dialogue = kwargs.get("train_dialogue", True)
        train_196_dialogue = kwargs.get("train_196", False)
        train_301_dialogue = kwargs.get("train_301", False)

        if train_dialogue:
            await self.run_dialogue_training(extracted_dir, "dialogueText.csv")

        if train_196_dialogue:
            await self.run_dialogue_training(extracted_dir, "dialogueText_196.csv")

        if train_301_dialogue:
            await self.run_dialogue_training(extracted_dir, "dialogueText_301.csv")

        return True

    async def run_dialogue_training(self, extracted_dir, dialogue_file):
        log.info(f"Beginning dialogue training on {dialogue_file}")
        start_time = time.time()

        tagger = PosLemmaTagger(language=self.chatbot.storage.tagger.language)

        with open(extracted_dir / dialogue_file, "r", encoding="utf-8") as dg:
            reader = csv.DictReader(dg)

            next(reader)  # Skip the header

            last_dialogue_id = None
            previous_statement_text = None
            previous_statement_search_text = ""
            statements_from_file = []

            save_every = 50
            count = 0

            async for row in AsyncIter(reader):
                dialogue_id = row["dialogueID"]
                if dialogue_id != last_dialogue_id:
                    previous_statement_text = None
                    previous_statement_search_text = ""
                    last_dialogue_id = dialogue_id
                    count += 1
                    if count >= save_every:
                        if statements_from_file:
                            self.chatbot.storage.create_many(statements_from_file)
                            statements_from_file = []
                        count = 0

                if len(row) > 0:
                    statement = Statement(
                        text=row["text"],
                        in_response_to=previous_statement_text,
                        conversation="training",
                        # created_at=date_parser.parse(row["date"]),
                        persona=row["from"],
                    )

                    for preprocessor in self.chatbot.preprocessors:
                        statement = preprocessor(statement)

                    statement.search_text = tagger.get_text_index_string(statement.text)
                    statement.search_in_response_to = previous_statement_search_text

                    previous_statement_text = statement.text
                    previous_statement_search_text = statement.search_text

                    statements_from_file.append(statement)

            if statements_from_file:
                self.chatbot.storage.create_many(statements_from_file)

        log.info(f"Training took {time.time() - start_time} seconds.")


class TwitterCorpusTrainer(Trainer):
    pass
    # def train(self, *args, **kwargs):
    #     """
    #     Train the chat bot based on the provided list of
    #     statements that represents a single conversation.
    #     """
    #     import twint
    #
    #     c = twint.Config()
    #     c.__dict__.update(kwargs)
    #     twint.run.Search(c)
    #
    #
    #     previous_statement_text = None
    #     previous_statement_search_text = ''
    #
    #     statements_to_create = []
    #
    #     for conversation_count, text in enumerate(conversation):
    #         if self.show_training_progress:
    #             utils.print_progress_bar(
    #                 'List Trainer',
    #                 conversation_count + 1, len(conversation)
    #             )
    #
    #         statement_search_text = self.chatbot.storage.tagger.get_text_index_string(text)
    #
    #         statement = self.get_preprocessed_statement(
    #             Statement(
    #                 text=text,
    #                 search_text=statement_search_text,
    #                 in_response_to=previous_statement_text,
    #                 search_in_response_to=previous_statement_search_text,
    #                 conversation='training'
    #             )
    #         )
    #
    #         previous_statement_text = statement.text
    #         previous_statement_search_text = statement_search_text
    #
    #         statements_to_create.append(statement)
    #
    #     self.chatbot.storage.create_many(statements_to_create)
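Both training loops in the deleted trainers module flush accumulated `Statement` objects to storage every `save_every` rows (300 for the movie corpus, 50 for the Ubuntu corpus) rather than writing one row at a time. The batching logic in isolation, as a sketch where `sink` stands in for `chatbot.storage.create_many`:

```python
def save_in_batches(items, save_every, sink):
    # Accumulate items and flush them to `sink` every `save_every` items,
    # with a final flush for any remainder.
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) >= save_every:
            sink(batch)
            batch = []
    if batch:
        sink(batch)

flushed = []
save_in_batches(range(7), 3, lambda b: flushed.append(list(b)))
print(flushed)  # [[0, 1, 2], [3, 4, 5], [6]]
```

Batching keeps memory bounded while amortizing per-write overhead across many statements.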
(deleted binary image: 4.6 MiB)
(deleted binary image: 144 KiB)
@@ -1,15 +0,0 @@
from redbot.core import data_manager

from .conquest import Conquest
from .mapmaker import MapMaker


async def setup(bot):
    cog = Conquest(bot)
    data_manager.bundled_data_path(cog)
    await cog.load_data()

    bot.add_cog(cog)

    cog2 = MapMaker(bot)
    bot.add_cog(cog2)
@@ -1,422 +0,0 @@
import asyncio
import json
import logging
import os
import pathlib
from abc import ABC
from shutil import copyfile
from typing import Optional

import discord
from PIL import Image, ImageChops, ImageColor, ImageOps
from discord.ext.commands import Greedy
from redbot.core import Config, commands
from redbot.core.bot import Red
from redbot.core.data_manager import bundled_data_path, cog_data_path

log = logging.getLogger("red.fox_v3.conquest")


class Conquest(commands.Cog):
    """
    Cog for
    """

    default_zoom_json = {"enabled": False, "x": -1, "y": -1, "zoom": 1.0}

    def __init__(self, bot: Red):
        super().__init__()
        self.bot = bot
        self.config = Config.get_conf(
            self, identifier=67111110113117101115116, force_registration=True
        )

        default_guild = {}
        default_global = {"current_map": None}
        self.config.register_guild(**default_guild)
        self.config.register_global(**default_global)

        self.data_path: pathlib.Path = cog_data_path(self)
        self.asset_path: Optional[pathlib.Path] = None

        self.current_map = None
        self.map_data = None
        self.ext = None
        self.ext_format = None

    async def red_delete_data_for_user(self, **kwargs):
        """Nothing to delete"""
        return

    async def load_data(self):
        """
        Initial loading of data from bundled_data_path and config
        """
        self.asset_path = bundled_data_path(self) / "assets"
        self.current_map = await self.config.current_map()

        if self.current_map:
            if not await self.current_map_load():
                await self.config.current_map.clear()

    async def current_map_load(self):
        map_data_path = self.asset_path / self.current_map / "data.json"
        if not map_data_path.exists():
            log.warning(f"{map_data_path} does not exist. Clearing current map")
            return False

        with map_data_path.open() as mapdata:
            self.map_data: dict = json.load(mapdata)
        self.ext = self.map_data["extension"]
        self.ext_format = "JPEG" if self.ext.upper() == "JPG" else self.ext.upper()
        return True

    @commands.group()
    async def conquest(self, ctx: commands.Context):
        """
        Base command for conquest cog. Start with `[p]conquest set map` to select a map.
        """
        if ctx.invoked_subcommand is None and self.current_map is not None:
            await self._conquest_current(ctx)

    @conquest.command(name="list")
    async def _conquest_list(self, ctx: commands.Context):
        """
        List currently available maps
        """
        maps_json = self.asset_path / "maps.json"

        with maps_json.open() as maps:
            maps_json = json.load(maps)
            map_list = "\n".join(maps_json["maps"])
            await ctx.maybe_send_embed(f"Current maps:\n{map_list}")

    @conquest.group(name="set")
    async def conquest_set(self, ctx: commands.Context):
        """Base command for admin actions like selecting a map"""
        pass

    @conquest_set.command(name="resetzoom")
    async def _conquest_set_resetzoom(self, ctx: commands.Context):
        """Resets the zoom level of the current map"""
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        zoom_json_path = self.data_path / self.current_map / "settings.json"
        if not zoom_json_path.exists():
            await ctx.maybe_send_embed(
                f"No zoom data found for {self.current_map}, reset not needed"
            )
            return

        with zoom_json_path.open("w+") as zoom_json:
            json.dump({"enabled": False}, zoom_json)

        await ctx.tick()

    @conquest_set.command(name="zoom")
    async def _conquest_set_zoom(self, ctx: commands.Context, x: int, y: int, zoom: float):
        """
        Set the zoom level and position of the current map

        x: positive integer
        y: positive integer
        zoom: float greater than or equal to 1
        """
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        if x < 0 or y < 0 or zoom < 1:
            await ctx.send_help()
            return

        zoom_json_path = self.data_path / self.current_map / "settings.json"

        zoom_data = self.default_zoom_json.copy()
        zoom_data["enabled"] = True
        zoom_data["x"] = x
        zoom_data["y"] = y
        zoom_data["zoom"] = zoom

        with zoom_json_path.open("w+") as zoom_json:
            json.dump(zoom_data, zoom_json)

        await ctx.tick()

    @conquest_set.command(name="zoomtest")
    async def _conquest_set_zoomtest(self, ctx: commands.Context, x: int, y: int, zoom: float):
        """
        Test the zoom level and position of the current map

        x: positive integer
        y: positive integer
        zoom: float greater than or equal to 1
        """
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        if x < 0 or y < 0 or zoom < 1:
            await ctx.send_help()
            return

        zoomed_path = await self._create_zoomed_map(
            self.data_path / self.current_map / f"current.{self.ext}", x, y, zoom
        )

        await ctx.send(
            file=discord.File(
                fp=zoomed_path,
                filename=f"current_zoomed.{self.ext}",
            )
        )

    async def _create_zoomed_map(self, map_path, x, y, zoom, **kwargs):
        current_map = Image.open(map_path)

        w, h = current_map.size
        zoom2 = zoom * 2
        zoomed_map = current_map.crop((x - w / zoom2, y - h / zoom2, x + w / zoom2, y + h / zoom2))
        # zoomed_map = zoomed_map.resize((w, h), Image.LANCZOS)
        zoomed_map.save(self.data_path / self.current_map / f"zoomed.{self.ext}", self.ext_format)
        return self.data_path / self.current_map / f"zoomed.{self.ext}"
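`_create_zoomed_map` above centers a crop box of size `w/zoom × h/zoom` on `(x, y)` before passing it to `Image.crop`. The box arithmetic on its own, as a sketch (`zoom_box` is an illustrative name, not part of the cog):

```python
def zoom_box(w, h, x, y, zoom):
    # Crop box centered on (x, y), covering 1/zoom of each dimension,
    # as the (left, upper, right, lower) tuple Image.crop expects.
    zoom2 = zoom * 2
    return (x - w / zoom2, y - h / zoom2, x + w / zoom2, y + h / zoom2)

print(zoom_box(1000, 500, 400, 300, 2.0))  # (150.0, 175.0, 650.0, 425.0)
```

At `zoom=2` the box spans half of each dimension, so the saved image shows a 2x magnified region around the chosen point.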
|
|
||||||
    @conquest_set.command(name="save")
    async def _conquest_set_save(self, ctx: commands.Context, *, save_name):
        """Save the current map to be loaded later"""
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        current_map_folder = self.data_path / self.current_map
        current_map = current_map_folder / f"current.{self.ext}"

        if not current_map_folder.exists() or not current_map.exists():
            await ctx.maybe_send_embed("Current map doesn't exist! Try setting a new one")
            return

        copyfile(current_map, current_map_folder / f"{save_name}.{self.ext}")
        await ctx.tick()
    @conquest_set.command(name="load")
    async def _conquest_set_load(self, ctx: commands.Context, *, save_name):
        """Load a saved map to be the current map"""
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        current_map_folder = self.data_path / self.current_map
        current_map = current_map_folder / f"current.{self.ext}"
        saved_map = current_map_folder / f"{save_name}.{self.ext}"

        if not current_map_folder.exists() or not saved_map.exists():
            await ctx.maybe_send_embed(f"Saved map not found in the {self.current_map} folder")
            return

        copyfile(saved_map, current_map)
        await ctx.tick()
    @conquest_set.command(name="map")
    async def _conquest_set_map(self, ctx: commands.Context, mapname: str, reset: bool = False):
        """
        Select a map from current available maps

        To add more maps, see the guide (WIP)
        """
        map_dir = self.asset_path / mapname
        if not map_dir.exists() or not map_dir.is_dir():
            await ctx.maybe_send_embed(
                f"Map `{mapname}` was not found in the {self.asset_path} directory"
            )
            return

        self.current_map = mapname
        await self.config.current_map.set(self.current_map)  # Save to config too

        await self.current_map_load()

        # map_data_path = self.asset_path / mapname / "data.json"
        # with map_data_path.open() as mapdata:
        #     self.map_data = json.load(mapdata)
        #
        # self.ext = self.map_data["extension"]

        current_map_folder = self.data_path / self.current_map
        current_map = current_map_folder / f"current.{self.ext}"

        if not reset and current_map.exists():
            await ctx.maybe_send_embed(
                "This map is already in progress, resuming from last game\n"
                "Use `[p]conquest set map [mapname] True` to start a new game"
            )
        else:
            if not current_map_folder.exists():
                os.makedirs(current_map_folder)
            copyfile(self.asset_path / mapname / f"blank.{self.ext}", current_map)

        await ctx.tick()
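The resume-vs-reset branch boils down to: keep the existing `current.{ext}` unless the caller passes `reset=True` or no game file exists yet. A small sketch of that decision, with `should_start_fresh` as a hypothetical helper:

```python
from pathlib import Path
from tempfile import TemporaryDirectory

def should_start_fresh(current_map: Path, reset: bool) -> bool:
    """Mirror the branch in _conquest_set_map: copy the blank map over
    current.{ext} unless a game is in progress and reset was not requested."""
    return reset or not current_map.exists()

with TemporaryDirectory() as tmp:
    current = Path(tmp) / "current.jpg"
    assert should_start_fresh(current, reset=False)      # no game yet: fresh copy
    current.touch()                                      # a game is now in progress
    assert not should_start_fresh(current, reset=False)  # resume last game
    assert should_start_fresh(current, reset=True)       # forced restart
```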
    @conquest.command(name="current")
    async def _conquest_current(self, ctx: commands.Context):
        """
        Send the current map.
        """
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        current_img = self.data_path / self.current_map / f"current.{self.ext}"

        await self._send_maybe_zoomed_map(ctx, current_img, f"current_map.{self.ext}")
    async def _send_maybe_zoomed_map(self, ctx, map_path, filename):
        zoom_data = {"enabled": False}

        zoom_json_path = self.data_path / self.current_map / "settings.json"

        if zoom_json_path.exists():
            with zoom_json_path.open() as zoom_json:
                zoom_data = json.load(zoom_json)

        if zoom_data["enabled"]:
            map_path = await self._create_zoomed_map(map_path, **zoom_data)

        await ctx.send(file=discord.File(fp=map_path, filename=filename))
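`_create_zoomed_map` takes `**kwargs` precisely so the whole `zoom_data` dict, including its `enabled` flag, can be splatted into it here. A self-contained sketch of the settings.json round trip, using a temporary directory in place of the cog's data path:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmp:
    zoom_json_path = Path(tmp) / "settings.json"

    # What the zoom setter writes:
    zoom_data = {"enabled": True, "x": 500, "y": 400, "zoom": 2.0}
    with zoom_json_path.open("w+") as zoom_json:
        json.dump(zoom_data, zoom_json)

    # What _send_maybe_zoomed_map reads back, defaulting to disabled:
    loaded = {"enabled": False}
    if zoom_json_path.exists():
        with zoom_json_path.open() as zoom_json:
            loaded = json.load(zoom_json)

    assert loaded["enabled"] and loaded["zoom"] == 2.0
```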
    @conquest.command("blank")
    async def _conquest_blank(self, ctx: commands.Context):
        """
        Print the blank version of the current map, for reference.
        """
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        current_blank_img = self.asset_path / self.current_map / f"blank.{self.ext}"

        await self._send_maybe_zoomed_map(ctx, current_blank_img, f"blank_map.{self.ext}")
    @conquest.command("numbered")
    async def _conquest_numbered(self, ctx: commands.Context):
        """
        Print the numbered version of the current map, for reference.
        """
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        numbers_path = self.asset_path / self.current_map / f"numbers.{self.ext}"
        if not numbers_path.exists():
            await ctx.send(
                file=discord.File(
                    fp=self.asset_path / self.current_map / f"numbered.{self.ext}",
                    filename=f"numbered.{self.ext}",
                )
            )
            return

        current_map = Image.open(self.data_path / self.current_map / f"current.{self.ext}")
        numbers = Image.open(numbers_path).convert("L")

        inverted_map = ImageOps.invert(current_map)

        loop = asyncio.get_running_loop()
        current_numbered_img = await loop.run_in_executor(
            None, Image.composite, current_map, inverted_map, numbers
        )

        current_numbered_img.save(
            self.data_path / self.current_map / f"current_numbered.{self.ext}", self.ext_format
        )

        await self._send_maybe_zoomed_map(
            ctx,
            self.data_path / self.current_map / f"current_numbered.{self.ext}",
            f"current_numbered.{self.ext}",
        )
    @conquest.command(name="multitake")
    async def _conquest_multitake(
        self, ctx: commands.Context, start_region: int, end_region: int, color: str
    ):
        """Claim all regions from start_region to end_region (inclusive) for a specified color"""
        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        try:
            color = ImageColor.getrgb(color)
        except ValueError:
            await ctx.maybe_send_embed(f"Invalid color {color}")
            return

        if end_region > self.map_data["region_max"] or start_region < 1:
            await ctx.maybe_send_embed(
                f"Max region number is {self.map_data['region_max']}, minimum is 1"
            )
            return

        regions = list(range(start_region, end_region + 1))

        await self._process_take_regions(color, ctx, regions)
    async def _process_take_regions(self, color, ctx, regions):
        current_img_path = self.data_path / self.current_map / f"current.{self.ext}"
        im = Image.open(current_img_path)
        async with ctx.typing():
            out: Image.Image = await self._composite_regions(im, regions, color)
            out.save(current_img_path, self.ext_format)
        await self._send_maybe_zoomed_map(ctx, current_img_path, f"map.{self.ext}")
    @conquest.command(name="take")
    async def _conquest_take(self, ctx: commands.Context, regions: Greedy[int], *, color: str):
        """
        Claim a territory or list of territories for a specified color

        :param regions: List of integer regions
        :param color: Color to claim regions
        """
        if not regions:
            await ctx.send_help()
            return

        if self.current_map is None:
            await ctx.maybe_send_embed("No map is currently set. See `[p]conquest set map`")
            return

        try:
            color = ImageColor.getrgb(color)
        except ValueError:
            await ctx.maybe_send_embed(f"Invalid color {color}")
            return

        for region in regions:
            if region > self.map_data["region_max"] or region < 1:
                await ctx.maybe_send_embed(
                    f"Max region number is {self.map_data['region_max']}, minimum is 1"
                )
                return

        await self._process_take_regions(color, ctx, regions)
    async def _composite_regions(self, im, regions, color) -> Image.Image:
        im2 = Image.new("RGB", im.size, color)

        loop = asyncio.get_running_loop()

        combined_mask = None
        for region in regions:
            mask = Image.open(
                self.asset_path / self.current_map / "masks" / f"{region}.{self.ext}"
            ).convert("L")
            if combined_mask is None:
                combined_mask = mask
            else:
                # combined_mask = ImageChops.logical_or(combined_mask, mask)
                combined_mask = await loop.run_in_executor(
                    None, ImageChops.multiply, combined_mask, mask
                )

        out = await loop.run_in_executor(None, Image.composite, im, im2, combined_mask)

        return out
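Why `ImageChops.multiply` combines the masks here: each "L"-mode region mask is black (0) over its region and white (255) elsewhere, multiplying keeps a pixel black when either input is black (a union of regions), and `Image.composite(im, im2, mask)` then takes the fill image `im2` wherever the mask is black. A Pillow-free sketch of the per-pixel math, with `multiply_masks` and `composite_pixels` as hypothetical stand-ins for the Pillow calls (binary masks assumed):

```python
def multiply_masks(a: list, b: list) -> list:
    """Emulate ImageChops.multiply on "L"-mode pixels: out = a * b / 255.
    A pixel stays black (0) if it is black in either mask."""
    return [x * y // 255 for x, y in zip(a, b)]

def composite_pixels(im, im2, mask):
    """Emulate Image.composite for a fully black/white mask: where the
    mask is white (255) keep `im`, where it is black (0) take `im2`."""
    return [a if m == 255 else b for a, b, m in zip(im, im2, mask)]

# Two region masks over a 4-pixel map; 0 marks each region, 255 the rest.
mask_1 = [0, 255, 255, 255]
mask_2 = [255, 0, 255, 255]
combined = multiply_masks(mask_1, mask_2)   # [0, 0, 255, 255]

current = ["old"] * 4
fill = ["red"] * 4
out = composite_pixels(current, fill, combined)  # first two pixels recolored
```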
(binary image file deleted; 400 KiB)
@ -1,3 +0,0 @@
{
    "region_max": 70
}
(binary image file deleted; 480 KiB)
(binary image file deleted; 345 KiB)
@ -1,3 +0,0 @@
{
    "region_max": 70
}
(binary image file deleted; 413 KiB)
@ -1,7 +0,0 @@
{
    "maps": [
        "simple",
        "ck2",
        "HoI"
    ]
}
(binary image file deleted; 312 KiB)
@ -1,4 +0,0 @@
{
    "region_max": 70,
    "extension": "jpg"
}
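The deleted data.json files above use the layout the cog's map loader expects: `region_max` plus an optional `extension`. A quick sketch of reading one back the way the commented-out loader in `_conquest_set_map` would, using a temporary file in place of the real asset path:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmp:
    map_data_path = Path(tmp) / "data.json"
    map_data_path.write_text('{"region_max": 70, "extension": "jpg"}')

    # json.load the file, then index the keys the cog uses:
    with map_data_path.open() as mapdata:
        map_data = json.load(mapdata)

    ext = map_data["extension"]          # "jpg"
    region_max = map_data["region_max"]  # 70
```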
(59 binary image files deleted; 21–56 KiB each)