Evaluation Issues #129

@YHao29

Description

Hello, we appreciate your wonderful work!

However, when I try to reproduce the evaluation, I run into some confusing problems.

First, here is my configuration:

export LEADERBOARD_ROOT=leaderboard
export CHALLENGE_TRACK_CODENAME=SENSORS
export PORT=$PT # same as the carla server port
export TM_PORT=$(($PT+500)) # port for traffic manager, required when spawning multiple servers/clients
export DEBUG_CHALLENGE=0
export REPETITIONS=1 # multiple evaluation runs
export ROUTES=langauto/benchmark_long.xml
export TEAM_AGENT=leaderboard/team_code/lmdriver_agent.py # agent
export TEAM_CONFIG=leaderboard/team_code/lmdriver_config.py # model checkpoint, not required for expert
export CHECKPOINT_ENDPOINT=results/lmdrive_result.json # results file
#export SCENARIOS=leaderboard/data/scenarios/no_scenarios.json #town05_all_scenarios.json
export SCENARIOS=leaderboard/data/official/all_towns_traffic_scenarios_public.json
export SAVE_PATH=data/eval # path for saving episodes while evaluating
export RESUME=True
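As a sanity check before launching the evaluation, it may help to verify that the CARLA server is actually listening on PORT and that TM_PORT (PORT + 500) is free; a silent port mismatch is a common reason the client hangs forever. The snippet below is a hypothetical helper, not part of the LMDrive repo, and the port value 2000 is just an assumed example for $PT:

```python
import socket

def tm_port(port: int) -> int:
    # Mirrors the shell arithmetic above: TM_PORT=$(($PT+500))
    return port + 500

def is_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    # connect_ex returns 0 on success, an errno value otherwise
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    carla_port = 2000  # assumed value of $PT; use your actual port
    print("CARLA server reachable:", is_port_open("127.0.0.1", carla_port))
    print("Traffic manager port would be:", tm_port(carla_port))
```

If the first line prints False, the agent will block waiting for the server rather than fail loudly.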
And here is my lmdriver_config.py:

    # Controller
    turn_KP = 1.25
    turn_KI = 0.75
    turn_KD = 0.3
    turn_n = 40  # buffer size

    speed_KP = 5.0
    speed_KI = 0.5
    speed_KD = 1.0
    speed_n = 40  # buffer size

    max_throttle = 0.75  # upper limit on throttle signal value in dataset
    brake_speed = 0.1  # desired speed below which brake is triggered
    brake_ratio = 1.1  # ratio of speed to desired speed at which brake is triggered
    clip_delta = 0.35  # maximum change in speed input to longitudinal controller

    llm_model = '/media/flingroup/3b136437-409e-409f-ab01-afb915b70726/model_ckp/llava-v1.5-7b'
    preception_model = 'memfuser_baseline_e1d3_return_feature'
    preception_model_ckpt = '/media/flingroup/3b136437-409e-409f-ab01-afb915b70726/model_ckp/vision-encoder-r50.pth.tar'
    lmdrive_ckpt = '/media/flingroup/3b136437-409e-409f-ab01-afb915b70726/model_ckp/llava-v1.5-checkpoint.pth'

    agent_use_notice = False
    sample_rate = 2
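For context, gains like turn_KP/turn_KI/turn_KD with a buffer size n are typically consumed by a small windowed PID controller, and the brake_speed / brake_ratio / clip_delta / max_throttle values by a throttle-or-brake rule on top of it. The sketch below is only my reading of how these parameters interact; the class and function names are hypothetical, not the repo's actual code:

```python
from collections import deque

class PIDController:
    """Windowed PID: the integral and derivative terms are computed
    over a fixed-size buffer of the last n errors."""

    def __init__(self, k_p: float, k_i: float, k_d: float, n: int):
        self._k_p, self._k_i, self._k_d = k_p, k_i, k_d
        self._window = deque([0.0] * n, maxlen=n)  # error history buffer

    def step(self, error: float) -> float:
        self._window.append(error)
        integral = sum(self._window) / len(self._window)
        derivative = self._window[-1] - self._window[-2]
        return self._k_p * error + self._k_i * integral + self._k_d * derivative

def longitudinal_control(current_speed, desired_speed, pid,
                         max_throttle=0.75, brake_speed=0.1,
                         brake_ratio=1.1, clip_delta=0.35):
    """Hypothetical throttle/brake rule mirroring the config values above.
    Returns (throttle, brake)."""
    # Brake when the target speed is very low, or when we are running
    # substantially faster than the target.
    if desired_speed < brake_speed or \
            current_speed / max(desired_speed, 1e-5) > brake_ratio:
        return 0.0, True
    # Clip the speed error fed to the longitudinal controller.
    delta = min(max(desired_speed - current_speed, 0.0), clip_delta)
    throttle = min(max(pid.step(delta), 0.0), max_throttle)
    return throttle, False

# Controllers built from the config values above (illustrative only):
turn_controller = PIDController(k_p=1.25, k_i=0.75, k_d=0.3, n=40)
speed_controller = PIDController(k_p=5.0, k_i=0.5, k_d=1.0, n=40)
```

With this reading, clip_delta bounds how aggressive a single control step can be, and max_throttle caps the final actuator command.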

Then I run ./leaderboard/scripts/run_evaluation.sh, but something goes wrong and the evaluation gets stuck here (see the figures):

[screenshot attached]

And the CARLA server output looks like this:

[screenshot attached]

I waited about an hour and it did not continue. Do you have any idea what might be causing this? We really need your help. Thanks so much!
