
Insert my custom information to reply to my customers #31

Closed
ademir10 opened this issue Mar 21, 2024 · 36 comments
Assignees
Labels
documentation: Improvements or additions to documentation
enhancement: New feature or request
good first issue: Good for newcomers
question: Further information is requested

Comments

@ademir10

Hello!
I'm trying to see whether your project can work like this one:
https://www.youtube.com/watch?v=Sh94c6yn5aQ

It seems like you have almost everything done. I'm just trying to insert my custom information like this person did, and then interact with my WhatsApp Business account. Is that possible?

I've created my .env file and connected to my WhatsApp through the generated QR code, so in my terminal I see this message:
QR has been generated! | Scan QR Code with you're mobile.
✔ User Authenticated!
✔ Client is ready | All set!
After that, I don't know what more I can do to reach the level shown in the video above.

Thank you so much!

@Zain-ul-din
Owner

See "how to create a custom model"; also consider checking this issue.

@Zain-ul-din Zain-ul-din self-assigned this Mar 22, 2024
@Zain-ul-din Zain-ul-din added documentation Improvements or additions to documentation good first issue Good for newcomers question Further information is requested labels Mar 22, 2024
@ademir10
Author

see how to create a custom model? also consider checking this issue

Thank you, my dear friend. I did what the docs said, but I don't understand one thing: I'm using Gemini, so where do I configure it?

After making all the changes you described, when I try to send a message using my custom model, I receive this in my prompt:
✔ QR has been generated! | Scan QR Code with you're mobile.
✔ User Authenticated!
✔ Client is ready | All set!
✖ CustomModel request fail | An error occur, at CustomModel.ts sendMessage(prompt, msg) err: Error: OpenAI error 401: {
"error": {
"message": "Incorrect API key provided: ADD_YOUR_KEY. You can find your API key at https://platform.openai.com/account/api-keys.",
"type": "invalid_request_error",
"param": null,
"code": "invalid_api_key"
}
}

It seems like it's trying to use OpenAI instead of Gemini.
This is my config:

/* Models config files */
import { Config } from './types/Config';

const config: Config = {
    chatGPTModel: "gpt-3.5-turbo", // learn more about GPT models: https://platform.openai.com/docs/models
    models: {
        ChatGPT: {
            prefix: '!chatgpt', // Prefix for the ChatGPT model
            enable: true // Whether the ChatGPT model is enabled or not
        },
        DALLE: {
            prefix: '!dalle', // Prefix for the DALLE model
            enable: true // Whether the DALLE model is enabled or not
        },
        StableDiffusion: {
            prefix: '!stable', // Prefix for the StableDiffusion model
            enable: true // Whether the StableDiffusion model is enabled or not
        },
        GeminiVision: {
            prefix: '!gemini-vision', // Prefix for the GeminiVision model
            enable: true // Whether the GeminiVision model is enabled or not
        },
        Gemini: {
            prefix: '!gemini', // Prefix for the Gemini model
            enable: true // Whether the Gemini model is enabled or not
        },
        Custom: [
            {
                /** Custom Model */
                modelName: 'whatsapp-respostas', // Name of the custom model
                prefix: '!get', // Prefix for the custom model
                enable: true, // Whether the custom model is enabled or not
                /**
                 * context: "file-path (.txt, .text, .md)",
                 * context: "text url",
                 * context: "text"
                 */
                context: './static/whatsapp-respostas.md', // Context for the custom model
            }
        ]
    },
    enablePrefix: {
        /** if enabled, reply only to messages that start with a prefix */
        enable: true, // Whether prefix messages are enabled or not
        /** default model to use if the message does not start with a prefix and enable is false */
        defaultModel: 'Gemini' // Default model to use if no prefix is present in the message
    }
};

export default config;

@ademir10
Author

see how to create a custom model? also consider checking this issue

Even using Gemini, do I also need an OpenAI account? Or is it possible to make it work with just Gemini?

@ademir10
Author

Actually, my custom model is not working with my custom prefix "!get":

Custom: [
    {
        /** Custom Model */
        modelName: 'whatsapp-respostas', // Name of the custom model
        prefix: '!get', // Prefix for the custom model
        enable: true, // Whether the custom model is enabled or not
        /**
         * context: "file-path (.txt, .text, .md)",
         * context: "text url",
         * context: "text"
         */
        context: './static/whatsapp-respostas.md', // Context for the custom model
    }
]
},
enablePrefix: {
    defaultModel: 'Gemini' // Default model to use if no prefix is present in the message
}

If I start a chat using my custom prefix !get, I receive the message about the API key because it tries to use OpenAI, which is the default:
Error: OpenAI error 401: { "error": { "message": "Incorrect API key provided: ADD_YOUR_KEY.

It happens even if I configure defaultModel: 'Gemini' as above.
What do I need to do to use my custom model with my custom prefix?

And I'd like to know how to keep my custom model enabled all the time, even if I don't start the chat with the prefix. Is that possible?

Thank you so much!

@Zain-ul-din
Owner

I just added a new feature so you can specify which model to use.

Usage:

  • download the code again and add the modelToUse key as shown in the image below

image
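The screenshot above isn't reproduced in this text, but based on the config shown later in this thread, the new key appears to go inside the Custom model entry, for example:

```typescript
Custom: [
    {
        modelName: 'whatsapp-respostas', // Name of the custom model
        prefix: '!get',                  // Prefix for the custom model
        enable: true,
        modelToUse: 'Gemini',            // new key: which AI model answers for this custom model
        context: './static/whatsapp-respostas.md',
    }
]
```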

@Zain-ul-din Zain-ul-din added the enhancement New feature or request label Mar 22, 2024
@ademir10
Author

I just added a new feature so you can specify which model to use.

thank you so much my dear friend! let me check!!!!

@ademir10
Author

I just added a new feature so you can specify which model to use.

Done! It's working as expected! Thank you so much!
One question: do you know if it's possible to add something like:

await sendMessagesWithDelay({
    // automatic message sent when we don't receive any message for N seconds
}, 15000)

Something like "Do you need anything more?", just so it seems like a human is there; something more natural.

@ademir10
Author

How can I keep using my custom model without the prefix?
When I disable the prefix, I can't use my custom model's information:

Custom: [
    {
        /** Custom Model */
        modelName: 'whatsapp-respostas', // Name of the custom model
        prefix: '!bot', // Prefix for the custom model
        enable: true, // Whether the custom model is enabled or not
        /**
         * context: "file-path (.txt, .text, .md)",
         * context: "text url",
         * context: "text"
         */
        modelToUse: 'Gemini',
        context: './static/whatsapp-respostas.txt', // Context for the custom model
    }
]
},
enablePrefix: {
    /** if enabled, reply only to messages that start with a prefix */
    enable: false, // DISABLED HERE, BUT I STILL NEED TO CONSULT MY MODEL'S INFORMATION
    /** default model to use if the message does not start with a prefix and enable is false */
    defaultModel: 'Gemini' // Default model to use if no prefix is present in the message
}
};

@Zain-ul-din
Owner

Good to hear! Star this repo and help us reach 128 stars.

One question:
do you know if is possible to add something like an automatic message when we dont receive messages for N seconds

It is possible to send a message if our system remains idle for N seconds, but I'm not going to add this to the master branch; you can implement it on your side.

Try this in src/lib/WhatsAppClient.ts:

image
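The screenshot isn't reproduced here, but the idea can be sketched roughly as follows. This is only an illustration under assumptions: the class name IdleReminder and the SendFn type are invented for the example, and in the real bot the send callback would wrap whatsapp-web.js's client.sendMessage with a serialized chat id.

```typescript
// Debounced idle reminder: (re)arm a timer per chat on every incoming message;
// if no new message arrives within idleMs, send one follow-up and disarm.
type SendFn = (chatId: string, text: string) => void;

class IdleReminder {
  private timers = new Map<string, ReturnType<typeof setTimeout>>();

  constructor(private send: SendFn, private idleMs: number) {}

  // Call this from onMessage for every incoming message.
  onIncoming(chatId: string): void {
    const prev = this.timers.get(chatId);
    if (prev !== undefined) clearTimeout(prev); // activity: reset the countdown
    this.timers.set(
      chatId,
      setTimeout(() => {
        this.timers.delete(chatId); // fire at most once per idle period
        this.send(chatId, "Podemos te ajudar em algo mais?");
      }, this.idleMs)
    );
  }

  // True while a reminder is still pending for this chat.
  pending(chatId: string): boolean {
    return this.timers.has(chatId);
  }
}
```

Unlike a bare setInterval inside onMessage, the timer here is cleared on every new message, so the reminder only fires after a real quiet period, and only once.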

@ademir10
Author

It is possible to send a message if our system remains idle for N number of seconds. But I'm not going to add this to the master branch you can implement it on your side.

Let me try, thanks!
And what about using my custom model without a prefix? Is that possible?

@Zain-ul-din
Owner

Zain-ul-din commented Mar 22, 2024

Currently, it is not supported. Maybe I'll add it later; let's keep this issue open.

Appendix:

[How to send whatsapp message in whatsapp-web.js](https://stackoverflow.com/questions/65157125/how-to-send-whatsapp-message-in-whatsapp-web-js)

@ademir10
Author

currently, it is not supported. Maybe I'll add it later keep this issue open

Please do! When we use a commercial WhatsApp number for this purpose, people will always be asking questions about the company, so it makes sense.

While this isn't implemented yet, can I intercept every received message and rewrite its content with something like:
Message.replace(message, "!bot" + Message);
That way, people won't need to add "!bot" before sending a message.

Is it possible? Where can I do it?
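A minimal sketch of that idea (the function name withDefaultPrefix and the "!bot" prefix are illustrative; it would run in the bot's message handler before the prefix lookup):

```typescript
// Route prefix-less messages to the custom model by prepending its prefix.
const CUSTOM_PREFIX = "!bot";

function withDefaultPrefix(body: string): string {
  // Messages that already carry a prefix ('!gemini', '!get', ...) pass through.
  return body.startsWith("!") ? body : `${CUSTOM_PREFIX} ${body}`;
}
```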

@ademir10
Author

ademir10 commented Mar 22, 2024

try this in src/lib/WhatsAppClient.ts

trying to do what you said:

private async onMessage(message: Message) {
    const msgStr = message.body;

    if (msgStr.length == 0) return;

    const modelToUse = Util.getModelByPrefix(msgStr) as AiModels;

    setInterval(async () => {
        // check if there is some message within 15 seconds
        const chatId = (await message.getChat()).id.user;
        this.client.sendMessage(chatId, "Podemos te ajudar em algo mais?");
    }, 15000);

    // media
    if (message.hasMedia) {

        if (
            modelToUse === undefined ||
            this.aiModels.get(modelToUse)?.modelType !== "Image"
        ) return;

When I start a chat, after receiving the first message I get this error:
/Users/ademir/zai/node_modules/puppeteer/lib/cjs/puppeteer/common/ExecutionContext.js:221
throw new Error('Evaluation failed: ' + helper_js_1.helper.getExceptionMessage(exceptionDetails));
^

Error: Evaluation failed: Error: wid error: invalid wid
at e (https://web.whatsapp.com/:2:4911)
at new f (https://web.whatsapp.com/app.a325b87cf6fdeb29465c.js:306:201624)
at Object.c [as createWid] (https://web.whatsapp.com/app.a325b87cf6fdeb29465c.js:306:207925)
at puppeteer_evaluation_script:2:53
at ExecutionContext._evaluateInternal (/Users/ademir/zai/node_modules/puppeteer/src/common/ExecutionContext.ts:273:13)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at ExecutionContext.evaluate (/Users/ademir/zai/node_modules/puppeteer/src/common/ExecutionContext.ts:140:12)
at Client.sendMessage (/Users/ademir/zai/node_modules/whatsapp-web.js/src/Client.js:888:28)

Node.js v21.0.0
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

It's something about the chatId:
Error: Evaluation failed: Error: wid error: invalid wid
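A likely cause, stated as an assumption about the whatsapp-web.js API: client.sendMessage expects the serialized wid (e.g. "5511999999999@c.us"), while chat.id.user is only the bare number, which can trigger "wid error: invalid wid". A tiny sketch of the difference:

```typescript
// Shape of a whatsapp-web.js chat id (simplified for illustration).
interface Wid {
  user: string;        // bare number, e.g. "5511999999999"
  _serialized: string; // full wid, e.g. "5511999999999@c.us"
}

// Use the serialized form when calling client.sendMessage.
function toChatId(id: Wid): string {
  return id._serialized;
}
```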

@Zain-ul-din
Owner

do it please! because when we use a comercial whatsapp number for this purpose, we always will have people asking for questions about the company, so it make sense.

DONE! Please keep one thing in mind: if you're making money out of this bot, then you should also support this project, since it is open source.

@ademir10
Author

DONE! Please keep one thing in mind if you're making money out of this bot then you should also support this project since this project is open source.

Yep, for sure! I'm just exploring the possibilities for a real-world scenario.
I'm testing it here in my company to see what is really possible to do.

We already have people doing this to make money, like in the link I sent you above.

The last question, hahaha:
Do you know if it's possible to integrate or connect the custom model so it checks answers in a database, for example?
Thinking about it at a high level, it's going to be amazing!

@Zain-ul-din
Owner

The last question

I didn't understand what you meant by connecting the custom model to a DB. If you are asking about loading the context from a database, then yes, it is possible by adding a URL to your content.

In this case it would be:

context: "https://raw.githubusercontent.com/Zain-ul-din/whatsapp-ai-bot/master/static/whatsapp-ai-bot.md"

image
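Put together, the Custom entry would point its context at a URL instead of a local file; a sketch based on the config used earlier in this thread:

```typescript
Custom: [
    {
        modelName: 'whatsapp-respostas',
        prefix: '!bot',
        enable: true,
        modelToUse: 'Gemini',
        // context can be a file path, a text URL, or inline text:
        context: "https://raw.githubusercontent.com/Zain-ul-din/whatsapp-ai-bot/master/static/whatsapp-ai-bot.md",
    }
]
```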

@Zain-ul-din
Owner

I just saw your profile; are you the CEO of the company?

@ademir10
Author

I just saw your profile are you the CEO of the company?

Yep! But my company has just one employee: me! It's not a big company.
I have built some apps and some web apps, nothing more.

@ademir10
Author


How do I use a custom model without a prefix? I'm trying it this way, but it kills the application:

Custom: [
    {
        /** Custom Model */
        modelName: 'whatsapp-respostas', // Name of the custom model
        prefix: '!', // Prefix for the custom model (TRIED AN EMPTY STRING HERE TOO, BUT NO WAY)
        enable: true, // Whether the custom model is enabled or not
        /**
         * context: "file-path (.txt, .text, .md)",
         * context: "text url",
         * context: "text"
         */
        modelToUse: 'Gemini',
        context: './static/whatsapp-respostas.txt', // Context for the custom model
    }
]
},
enablePrefix: {
    /** if enabled, reply only to messages that start with a prefix */
    enable: true, // Whether prefix messages are enabled or not
    /** default model to use if the message does not start with a prefix and enable is false */
    defaultModel: 'Gemini' // Default model to use if no prefix is present in the message
}
};

@Zain-ul-din
Owner


:-) Let me know if you need my help. I'm a student doing open-source work on GitHub.

@Zain-ul-din
Owner

Zain-ul-din commented Mar 22, 2024


First of all, install the latest code and set enablePrefix.enable to false.

- Don't use an empty string as the prefix.

@ademir10
Author

enablePrefix

let me check!

@ademir10
Author

ademir10 commented Mar 22, 2024

enablePrefix

Still not working.
When I disable enablePrefix, it starts to use Gemini and not my custom model.

I can send messages without a prefix, but it doesn't look for the answers in my custom model:

Custom: [
    {
        /** Custom Model */
        modelName: 'whatsapp-respostas', // Name of the custom model
        prefix: '!bot', // Prefix for the custom model
        enable: true, // Whether the custom model is enabled or not
        modelToUse: 'Gemini',
        context: './static/whatsapp-respostas.txt', // Context for the custom model
    }
]
},
enablePrefix: {
    enable: false, // FALSE HERE DISABLES MY CUSTOM MODEL
    defaultModel: 'Gemini' // Default model to use if no prefix is present in the message
}
};

@Zain-ul-din
Owner

wait let me check

@Zain-ul-din
Owner

working for me. share your config file here.

image

@ademir10
Author

ademir10 commented Mar 22, 2024

/* Models config files */
import { Config } from './types/Config';

const config: Config = {
    chatGPTModel: "gpt-3.5-turbo", // learn more about GPT models https://platform.openai.com/docs/models
    models: {
        ChatGPT: {
            prefix: '!chatgpt', // Prefix for the ChatGPT model
            enable: true // Whether the ChatGPT model is enabled or not
        },
        DALLE: {
            prefix: '!dalle', // Prefix for the DALLE model
            enable: true // Whether the DALLE model is enabled or not
        },
        StableDiffusion: {
            prefix: '!stable', // Prefix for the StableDiffusion model
            enable: true // Whether the StableDiffusion model is enabled or not
        },
        GeminiVision: {
            prefix: '!gemini-vision', // Prefix for the GeminiVision model
            enable: true // Whether the GeminiVision model is enabled or not
        },
        Gemini: {
            prefix: '!gemini', // Prefix for the Gemini model
            enable: true // Whether the Gemini model is enabled or not
        },
        Custom: [
            {
                /** Custom Model */
                modelName: 'whatsapp-respostas', // Name of the custom model
                prefix: '!bot', // Prefix for the custom model
                enable: true, // Whether the custom model is enabled or not
                /**
                    * context: "file-path (.txt, .text, .md)",
                    * context: "text url",
                    * context: "text"
                  */
                modelToUse: 'Gemini',
                context: './static/whatsapp-respostas.txt', // Context for the custom model
            }
        ]
    },
    enablePrefix: {
        /** if enable, reply to those messages start with prefix  */
        enable: false, // Whether prefix messages are enabled or not
        /** default model to use if message not starts with prefix and enable is false  */
        defaultModel: 'Gemini' // Default model to use if no prefix is present in the message
    }
};

export default config;

@ademir10
Author

working for me. share your config file here.


It's still not working here.

@Zain-ul-din
Owner

Oh, I see: in the default model to use, you need to write 'Custom'.

image
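In config terms, a sketch of the change the screenshot shows:

```typescript
enablePrefix: {
    enable: false,          // reply to every message; no prefix required
    defaultModel: 'Custom'  // route prefix-less messages to the custom model, not Gemini
}
```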

@ademir10
Author

Fixed, my dear friend!
Now it's working as expected. I'll keep running some tests and come back with my feedback. It's possible to integrate many good things here. Nice job!

@ademir10
Author


Did you see anything about this error? I tried to work with your example, but no luck.

@Zain-ul-din
Owner

fixed my dear friend! now its working as expected, i'll keep making some tests and back to give my feedbacks, its possible to integrate many good things here! nice job!

You're welcome! If you need any kind of help, let me know.

@Zain-ul-din
Owner

did you see something about this error? i tried to work with your example but no way...

I think this code will not work; remove it for now. I'll add this feature maybe in a few days; I'm still busy with my university FYP (Final Year Project).

@ademir10
Author


I don't know if it's possible for you, but I'm thinking we could have a meeting to talk about some possibilities together.
Would you be up for working on this? Let me know!

@ademir10
Author


It's okay! Take your time, my friend!

@Zain-ul-din
Owner


I dont know if is possible for you, but i'm thinking we have a meeting to talk about some possibilities together. are you working there? let me know!

talking about what?

@ademir10
Author

talking about what?

We'd build a tool to be integrated into one of my web applications; it's about work.

Projects
None yet
Development

No branches or pull requests

2 participants