# duix-guiji/duix-sdk
## Overview

DUIX digital humans combine TTS, ASR, digital human cloning and other technologies to create highly anthropomorphic, interactive virtual personas. A digital human can listen and understand like a person, and communicate with users through expressive language, facial expressions and gestures. Users can converse with digital humans through phones, computers, all-in-one devices, projectors or tablets, and the digital human responds in a natural, human way. DUIX provides a digital human interaction PaaS platform to empower enterprises and the wider ecosystem. The platform is highly open, with a complete integration system: NLP, intelligent voice, digital human avatars and more all support open integration. DUIX also provides a highly abstracted digital human interaction SDK that is easy to integrate and broadly compatible across terminals, letting enterprises focus on their own business.

## Install

```shell
# Install Duix
npm i duix-guiji -S
```

## Quick start

```javascript
import DUIX from 'duix-guiji'

// Fetch a token via JWT
let request = new XMLHttpRequest()
request.onreadystatechange = () => {
    if (request.readyState === 4) {
        if (request.status === 200) {
            const token = JSON.parse(request.responseText).data
            init(token)
        }
    }
}

// appId is the robotCode of the digital human
request.open('GET', `https://${youJWTAjaxUrl}?appId=xxxxxxxxxx`)
request.send()

const init = token => {
    const duix = new DUIX({
        url: 'https://robot.guiji.ai/duix-cc/',
        logger: 'error',
        container: document.querySelector('.stage'),
        robot: {
            token,
            code: 'xxxxxxxxxxxxxxxxxx'
        }
    })

    duix.on('load', () => {
        console.log('load')
    })

    duix.on('canplaythrough', function (e) {
        console.log('canplaythrough')
        duix.play()
    })

    duix.on('error', function (e) {
        console.error('error', e)
    })

    duix.on('bodyload', function (e) {
        duix.say('Hello, I am a Silicon Intelligence digital human. Nice to meet you.') // drive the digital human to speak with text
        // duix.say('https://duix.guiji.ai/nfs/ccm-file/0c710466e703224167ead95f1fa6ef58.wav', true) // drive the digital human to speak with an audio file
    })
}
```

## Option

`new DUIX(options)` in the example above returns a DUIX instance, where `options` is a configuration object with the following fields:

| Key | Type | Description | Default | Example |
| --- | --- | --- | --- | --- |
| container | Element | The digital human is rendered into this DOM element, filling its width and height. | | `document.querySelector('#duix')` |
| logger | boolean\|string | Log level. Options: `false`\|`'debug'`\|`'info'`\|`'warn'`\|`'error'`. | `false` | `false` |
| url | string | Server URL, obtained from the DUIX backend. | | `https://api.us.guiji.ai` |
| faceCache | object | Buffering configuration. | | |
| faceCache.duration | number | Buffer duration, in seconds. Increase it on machines with a weaker CPU. The `canplaythrough` event fires when the buffer is full. | `1` | |
| robot | object | Digital human configuration. | | |
| robot.code | string | Digital human code, obtained from the official website console. | | `205692370051410784` |
| robot.token | string | Digital human token, generated via JWT (a common cross-domain authentication scheme) from `robot.code` and your secret. See the Java example below. | | |
| quality | object | Picture quality configuration. Decoding the digital human consumes CPU; where the CPU is weak, lowering the quality increases decoding speed. | | |
| quality.fps | number | Frame rate. Options: `15`\|`20`\|`25`. | | |
| quality.isQuarter | boolean | Whether to reduce the resolution. When `true`, the resolution is reduced to 270×480. | `false` | |
| body | object | Silent-video configuration. | | |
| body.autoplay | boolean | Whether the silent video plays automatically once loaded. If `false`, call `duix.playSilence()` after the `bodyload` event fires. | `true` | |
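Pulling the table together, a full `options` object might look like the following sketch. All values are placeholders: the `url`, `code` and `token` must come from your own DUIX account, and `container` must be a real DOM element in the browser.

```javascript
// Example options object (placeholder values; adjust for your account).
const options = {
    url: 'https://api.us.guiji.ai',  // server URL from the DUIX backend
    logger: 'warn',                  // false | 'debug' | 'info' | 'warn' | 'error'
    container: null,                 // document.querySelector('#duix') in the browser
    faceCache: {
        duration: 1                  // buffer length in seconds
    },
    robot: {
        code: '205692370051410784',  // digital human code
        token: '<jwt-from-your-server>' // JWT signed with your secret
    },
    quality: {
        fps: 20,                     // 15 | 20 | 25
        isQuarter: false             // true lowers resolution to 270x480
    },
    body: {
        autoplay: true               // false => call duix.playSilence() after bodyload
    }
}
```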
JWT generation example (Java):

```java
public class JwtUtils {
    /**
     * Create a JWT.
     *
     * @param robotCode the subject (the digital human's robotCode)
     * @param secret    the signing secret
     * @param exp       time to expiry, in seconds
     * @return the signed token
     */
    public static String createJwt(String robotCode, String secret, int exp) {
        Calendar nowTime = Calendar.getInstance();
        nowTime.add(Calendar.SECOND, exp);
        Date expiresDate = nowTime.getTime();

        // Back-date the issued-at time by 10 seconds so validation does not
        // fail when server clocks are slightly out of sync.
        Calendar issuedCalendar = Calendar.getInstance();
        issuedCalendar.add(Calendar.SECOND, -10);
        return JWT.create()
                // issued-at time
                .withIssuedAt(issuedCalendar.getTime())
                // expiry time
                .withExpiresAt(expiresDate)
                // payload claim
                .withClaim("robotCode", robotCode)
                // sign with the secret
                .sign(Algorithm.HMAC256(secret));
    }
}
```

## Method

### DUIX(options) constructor

See the Option table above for the parameters.

### say(words, [isVoice = false])

Drive the digital human to speak, from either text or an audio file. After this method is called, the resource is loaded immediately, and the `canplaythrough` event fires once the buffer is full. Make sure the digital human's silent video has finished loading (i.e. the `bodyload` event has fired) before calling this method.

Parameters:

- `words` — what the digital human should say. It can be text, such as "Hello, I am a silicon-based smart digital person, and I am glad to meet you."; or an audio URL, such as `https://www.xxxx.com/cdn/abcd.wav`.
- `isVoice` — optional; indicates whether `words` is audio. `true` means it is audio; the default is `false`.
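Since `words` can be either plain text or an audio URL, a small helper can pick the `isVoice` flag automatically. Both `isAudioUrl` and `sayAuto` below are hypothetical helpers, not part of the SDK, and assume audio is always referenced by an http(s) URL ending in a common audio extension, as in the examples above:

```javascript
// Hypothetical helper (not part of the SDK): decide whether a `words`
// argument should be passed to duix.say() with isVoice = true.
function isAudioUrl(words) {
    return /^https?:\/\/\S+\.(wav|mp3|ogg|m4a)$/i.test(words)
}

// Hypothetical wrapper that fills in the isVoice flag automatically.
function sayAuto(duix, words) {
    duix.say(words, isAudioUrl(words))
}
```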

### playSilence()

Play the silent video. When `options.body.autoplay = false`, use this method to start it.

### play()

Start playing the digital human. This method is generally called in the `canplaythrough` event handler.

### pause()

Pause the digital human. While paused, the digital human does not speak and the screen switches to the silent video.

### resume()

Resume playback after a pause.

### stop()

Stop playback.

### on(eventname, callback)

Listen for an event.

Parameters:

- `eventname` — the name of the event; see the table below.
- `callback` — the callback function.

### getCanvas()

Get the internal canvas, for advanced development on top of it.

### getAudioContext()

Get the internal AudioContext of the speech audio, which can be used for audio visualization.

## Event

| Name | Description |
| --- | --- |
| bodyload | The silent video has finished loading. `duix.say(...)` may only be called after this event. |
| bodyprogress | Silent-video loading progress; can be used to drive a loading animation. |
| canplaythrough | The buffer is full and playback can start. |
| load | This round of speech content has finished loading. Resources are lazily loaded, so this event may never fire if playback (`duix.play()`) never starts. |
| play | The digital human has started playing. |
| pause | Playback has been paused. |
| timeupdate | Fired for every frame while the digital human is playing. |
| ended | Playback has ended. |
| error | A DUIX error occurred. |
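The `on(eventname, callback)` contract and the typical lifecycle order can be illustrated with a minimal stand-in emitter. This is a sketch for illustration only, not the SDK's implementation:

```javascript
// Minimal stand-in for the DUIX event interface (illustration only).
class FakeDuix {
    constructor() { this.handlers = {} }
    on(eventname, callback) {
        (this.handlers[eventname] ||= []).push(callback)
    }
    emit(eventname, payload) {
        for (const cb of this.handlers[eventname] || []) cb(payload)
    }
}

// Typical lifecycle order as documented above:
// bodyload -> (say) -> canplaythrough -> (play) -> play -> ended
const duix = new FakeDuix()
const seen = []
for (const name of ['bodyload', 'canplaythrough', 'play', 'ended']) {
    duix.on(name, () => seen.push(name))
}
duix.emit('bodyload')
duix.emit('canplaythrough')
duix.emit('play')
duix.emit('ended')
console.log(seen.join(' -> ')) // bodyload -> canplaythrough -> play -> ended
```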

## Version

0.0.45 (not yet released to npm)

  1. Fix screen-monitoring events firing multiple times.
  2. Fix the digital human's face continuing to play after a pause followed by a stop.
  3. Fix audio continuing to play when resume is called after a stop.
  4. Fix playback starting immediately when switching to the background and back while paused.

0.0.44

  1. Added authentication in this major version.
  2. Optimized test code to simplify testing.
  3. Fixed assorted bugs.

0.0.43

  1. Added the getAudioDest method to get a MediaStream from the AudioContext.

0.0.42

  1. Request.js => getArrayBuffer: added a method to actively abort the request.
  2. DigitalHuman.js => _sayVoice: return early when the network request was cancelled.
  3. DigitalHuman.js => stop: added a cancel call so a request completing after stop can no longer defeat the stop.

0.0.41

  1. Request.js: added an axios timeout.
  2. Request.js => getArrayBuffer: return on audio request failure; DigitalHuman.js => _sayVoice: added the corresponding check; DUIX.js: added a new audioFailed event, fired when the audio request fails.

0.0.40

  1. Fix the bug DigitalHuman.js line:166 & 169 event name error causes wsClose wsError not to be triggered
  2. Modify the webpack configuration to output the SDK version once by default, which is convenient for debugging in development and production environments

0.0.39

  1. Added pause and resume methods.
  2. Fixed occasional audio swallowing (dropped speech).
  3. The pause event is no longer triggered when playback ends; only the ended event is triggered.
  4. New behavior: picture and sound pause when the page is hidden and resume when it becomes visible again.

0.0.38

  1. Fixed an occasional bug where loading stalled and playback never started after calling say.
  2. New behavior: when options.body.autoplay = false, calling say no longer auto-plays the silent video.

0.0.37

  1. Added the getCanvas() method.
  2. Added the getAudioContext() method.

0.0.36

  1. Changed the startup method so the dev server binds to an IP address, allowing tests to be accessed normally from a phone.
  2. Set AIFace.js reconnectInterval to 1 to enable reconnection after a disconnect.
  3. Bug fix in AIFace.js line 48: close => onClose.

0.0.34:

  1. Added wsClose event, AIFace connection close event.
  2. Added wsError event, AIFace connection error event.

0.0.33:

  1. Silent video now alternates between forward and reverse playback, fixing the jump that occurred when the silent video's end does not connect seamlessly to its start (e.g. the Jordan male model).
  2. Removed some debug logs.
  3. Fixed a bug where the load event was not triggered.

0.0.32

  1. Fix the bug that the canplaythrough event cannot be triggered when the audio is too short.

0.0.31

  1. Further optimize the client-side buffering strategy to reduce the memory usage. The memory usage of the Jordan model is now stable at around 700M.
  2. Fix some bugs

0.0.30

  1. Modify the client's buffer strategy to reduce the client's memory usage.
  2. New configuration options.body.autoplay is used to control whether the silent video will automatically play after loading. The default is true, if set to false, you can call duix.playSilence() method to play actively after the bodyload event is triggered.
  3. Optimize the TTS caching scheme, now the cache can be kept longer.

0.0.27

  1. Added the body.autoplay configuration to control whether the body plays automatically after loading.
  2. Removed the code for real-time texture mapping; the buffer is now always used (its size can be set to 0).
  3. Changed the default buffer strategy to auto: the buffer size is predicted from the loading speed of the first half second of face data.
  4. Adjusted the decoding interval to reduce instantaneous CPU load, fixing forced page refreshes caused by CPU spikes on some phones.

0.0.26

  1. Fixed an error when quality.fps and quality.quarter are not provided.
  2. Added the bodyprocess event to report body loading progress.
